Nov 21 13:32:01 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 21 13:32:01 crc restorecon[4674]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:01 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 
13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:02 crc 
restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 
13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:02 crc restorecon[4674]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 21 13:32:04 crc kubenswrapper[4675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 13:32:04 crc kubenswrapper[4675]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 21 13:32:04 crc kubenswrapper[4675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 13:32:04 crc kubenswrapper[4675]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
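The long "not reset as customized by admin" run above is restorecon declining to touch files whose current SELinux context it treats as an admin customization; container_file_t is typically among the policy's customizable types, and the category pairs (s0:c7,c13 and friends) are the per-pod MCS labels the container runtime assigned, so only genuinely mismatched entries such as /var/lib/kubelet/config.json and /var/usrlocal/bin/kubenswrapper get relabeled. A minimal sketch of how one might inspect the labels restorecon is comparing, assuming a Linux host with SELinux and Python 3; the default path is illustrative and the output format is our own, not part of this log:

import os
import sys

def selinux_label(path: str) -> str:
    # The label lives in the "security.selinux" extended attribute as a
    # NUL-terminated byte string, e.g.
    # b"system_u:object_r:container_file_t:s0:c7,c13\x00".
    raw = os.getxattr(path, "security.selinux", follow_symlinks=False)
    return raw.rstrip(b"\x00").decode()

def walk_labels(root: str) -> None:
    # Print "<label>  <path>" for everything under root, the same
    # current-vs-policy comparison input restorecon works from.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            try:
                print(f"{selinux_label(full)}  {full}")
            except OSError as err:  # no xattr, permission denied, file vanished
                print(f"??  {full} ({err})", file=sys.stderr)

if __name__ == "__main__":
    walk_labels(sys.argv[1] if len(sys.argv) > 1 else "/var/lib/kubelet")

Run as root to avoid permission errors under pod volumes; comparing this output against matchpathcon or restorecon -nv shows which entries the relabel pass would actually change.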
Nov 21 13:32:04 crc kubenswrapper[4675]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 21 13:32:04 crc kubenswrapper[4675]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.341199 4675 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358628 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358666 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358704 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358728 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358742 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358754 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358766 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358778 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358790 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358801 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358812 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358823 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358834 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358845 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358861 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358874 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358888 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358901 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358914 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358928 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358939 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358949 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358961 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358973 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358986 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.358997 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359013 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359024 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359035 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359046 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359057 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359110 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359122 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359138 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359153 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359165 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359179 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359194 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359213 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359226 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359236 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359247 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359258 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359269 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359280 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359290 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359301 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359313 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359324 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359335 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359346 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359357 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359367 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359378 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359389 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359400 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359415 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359429 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359441 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359452 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359464 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359478 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359489 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359499 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359510 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359522 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359533 4675 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359545 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359559 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359569 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.359581 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359783 4675 flags.go:64] FLAG: --address="0.0.0.0" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359808 4675 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359835 4675 flags.go:64] FLAG: --anonymous-auth="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359853 4675 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359871 4675 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359885 4675 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359902 4675 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359918 4675 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359931 4675 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359944 4675 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359959 4675 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359973 4675 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359985 4675 flags.go:64] FLAG: --cgroup-driver="cgroupfs" 
Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.359998 4675 flags.go:64] FLAG: --cgroup-root="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360010 4675 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360024 4675 flags.go:64] FLAG: --client-ca-file="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360036 4675 flags.go:64] FLAG: --cloud-config="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360048 4675 flags.go:64] FLAG: --cloud-provider="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360060 4675 flags.go:64] FLAG: --cluster-dns="[]" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360116 4675 flags.go:64] FLAG: --cluster-domain="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360128 4675 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360140 4675 flags.go:64] FLAG: --config-dir="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360151 4675 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360164 4675 flags.go:64] FLAG: --container-log-max-files="5" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360179 4675 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360191 4675 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360203 4675 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360216 4675 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360228 4675 flags.go:64] FLAG: --contention-profiling="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360240 4675 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360252 4675 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360265 4675 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360277 4675 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360291 4675 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360303 4675 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360314 4675 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360326 4675 flags.go:64] FLAG: --enable-load-reader="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360337 4675 flags.go:64] FLAG: --enable-server="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360349 4675 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360365 4675 flags.go:64] FLAG: --event-burst="100" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360378 4675 flags.go:64] FLAG: --event-qps="50" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360391 4675 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360403 4675 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 21 13:32:04 crc 
kubenswrapper[4675]: I1121 13:32:04.360415 4675 flags.go:64] FLAG: --eviction-hard="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360429 4675 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360440 4675 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360452 4675 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360464 4675 flags.go:64] FLAG: --eviction-soft="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360476 4675 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360487 4675 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360499 4675 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360510 4675 flags.go:64] FLAG: --experimental-mounter-path="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360523 4675 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360535 4675 flags.go:64] FLAG: --fail-swap-on="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360546 4675 flags.go:64] FLAG: --feature-gates="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360560 4675 flags.go:64] FLAG: --file-check-frequency="20s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360571 4675 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360584 4675 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360596 4675 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360608 4675 flags.go:64] FLAG: --healthz-port="10248" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360621 4675 flags.go:64] FLAG: --help="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360633 4675 flags.go:64] FLAG: --hostname-override="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360645 4675 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360657 4675 flags.go:64] FLAG: --http-check-frequency="20s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360669 4675 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360682 4675 flags.go:64] FLAG: --image-credential-provider-config="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360693 4675 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360706 4675 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360717 4675 flags.go:64] FLAG: --image-service-endpoint="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360729 4675 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360741 4675 flags.go:64] FLAG: --kube-api-burst="100" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360753 4675 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360778 4675 flags.go:64] FLAG: --kube-api-qps="50" Nov 21 13:32:04 crc 
kubenswrapper[4675]: I1121 13:32:04.360796 4675 flags.go:64] FLAG: --kube-reserved="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360809 4675 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360824 4675 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360836 4675 flags.go:64] FLAG: --kubelet-cgroups="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360847 4675 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360859 4675 flags.go:64] FLAG: --lock-file="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360870 4675 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360881 4675 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360893 4675 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360910 4675 flags.go:64] FLAG: --log-json-split-stream="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360922 4675 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360933 4675 flags.go:64] FLAG: --log-text-split-stream="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360945 4675 flags.go:64] FLAG: --logging-format="text" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360956 4675 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360967 4675 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360979 4675 flags.go:64] FLAG: --manifest-url="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.360990 4675 flags.go:64] FLAG: --manifest-url-header="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361004 4675 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361015 4675 flags.go:64] FLAG: --max-open-files="1000000" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361030 4675 flags.go:64] FLAG: --max-pods="110" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361042 4675 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361054 4675 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361130 4675 flags.go:64] FLAG: --memory-manager-policy="None" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361143 4675 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361155 4675 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361166 4675 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361178 4675 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361202 4675 flags.go:64] FLAG: --node-status-max-images="50" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361213 4675 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361224 4675 
flags.go:64] FLAG: --oom-score-adj="-999" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361235 4675 flags.go:64] FLAG: --pod-cidr="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361246 4675 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361262 4675 flags.go:64] FLAG: --pod-manifest-path="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361273 4675 flags.go:64] FLAG: --pod-max-pids="-1" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361284 4675 flags.go:64] FLAG: --pods-per-core="0" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361296 4675 flags.go:64] FLAG: --port="10250" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361309 4675 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361321 4675 flags.go:64] FLAG: --provider-id="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361335 4675 flags.go:64] FLAG: --qos-reserved="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361347 4675 flags.go:64] FLAG: --read-only-port="10255" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361360 4675 flags.go:64] FLAG: --register-node="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361373 4675 flags.go:64] FLAG: --register-schedulable="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361386 4675 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361407 4675 flags.go:64] FLAG: --registry-burst="10" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361419 4675 flags.go:64] FLAG: --registry-qps="5" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361431 4675 flags.go:64] FLAG: --reserved-cpus="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361443 4675 flags.go:64] FLAG: --reserved-memory="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361458 4675 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361470 4675 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361482 4675 flags.go:64] FLAG: --rotate-certificates="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361494 4675 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361506 4675 flags.go:64] FLAG: --runonce="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361518 4675 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361532 4675 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361545 4675 flags.go:64] FLAG: --seccomp-default="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361558 4675 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361571 4675 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361584 4675 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361596 4675 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 
13:32:04.361609 4675 flags.go:64] FLAG: --storage-driver-password="root" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361620 4675 flags.go:64] FLAG: --storage-driver-secure="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361634 4675 flags.go:64] FLAG: --storage-driver-table="stats" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361646 4675 flags.go:64] FLAG: --storage-driver-user="root" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361667 4675 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361704 4675 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361718 4675 flags.go:64] FLAG: --system-cgroups="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361729 4675 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361750 4675 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361762 4675 flags.go:64] FLAG: --tls-cert-file="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361774 4675 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361791 4675 flags.go:64] FLAG: --tls-min-version="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361803 4675 flags.go:64] FLAG: --tls-private-key-file="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361816 4675 flags.go:64] FLAG: --topology-manager-policy="none" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361827 4675 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361841 4675 flags.go:64] FLAG: --topology-manager-scope="container" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361853 4675 flags.go:64] FLAG: --v="2" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361870 4675 flags.go:64] FLAG: --version="false" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361886 4675 flags.go:64] FLAG: --vmodule="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361900 4675 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.361913 4675 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362226 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362244 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362256 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362267 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362280 4675 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362295 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362310 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362321 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362332 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362343 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362354 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362365 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362376 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362387 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362398 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362416 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362426 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362437 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362453 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362465 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362474 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362484 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362493 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362501 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362510 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362518 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362527 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362535 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362544 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362552 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362563 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362571 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362580 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362589 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362597 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362606 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362621 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362632 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362644 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362655 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362665 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362673 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362682 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362690 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362699 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362707 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362716 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362727 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362735 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362744 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362753 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362762 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362770 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362778 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362787 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362796 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362804 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362813 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362821 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362830 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362839 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362847 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362855 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362864 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362872 4675 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362881 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362891 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362899 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362911 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362919 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.362928 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.362956 4675 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.379888 4675 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.379933 4675 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380223 4675 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380622 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380643 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380650 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380657 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380663 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380668 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380675 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380680 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380686 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380691 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380696 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380701 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380706 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380711 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380716 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380721 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380726 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380732 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380737 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380742 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380747 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380752 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380757 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380762 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380767 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380772 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380778 4675 feature_gate.go:330] unrecognized feature 
gate: SignatureStores Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380782 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380787 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380793 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380805 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380810 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380815 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380820 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380825 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380830 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380835 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380842 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380848 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380854 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380859 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380864 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380870 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380875 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380881 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380887 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380894 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380901 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380906 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380912 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380917 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380922 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380927 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380933 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380937 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380943 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380948 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380953 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380959 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380965 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380970 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380976 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380982 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380987 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380992 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.380997 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381002 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381007 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381011 4675 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381016 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.381026 4675 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381362 4675 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381376 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381381 4675 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381386 4675 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381391 4675 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381396 4675 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381401 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381406 4675 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381411 4675 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381416 4675 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381420 4675 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381426 4675 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381430 4675 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381435 4675 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381440 4675 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381445 4675 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381450 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381455 4675 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381461 4675 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381468 4675 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381474 4675 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381480 4675 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381487 4675 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381494 4675 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381501 4675 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381507 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381513 4675 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381520 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381526 4675 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381532 4675 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381539 4675 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381545 4675 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381552 4675 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381560 4675 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381566 4675 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381571 4675 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381576 4675 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381581 4675 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381586 4675 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381591 4675 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381595 4675 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381600 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381605 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381610 4675 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381615 4675 feature_gate.go:330] unrecognized 
feature gate: EtcdBackendQuota Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381620 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381625 4675 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381630 4675 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381634 4675 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381641 4675 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381647 4675 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381655 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381661 4675 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381691 4675 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381699 4675 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381707 4675 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381715 4675 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381722 4675 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381729 4675 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381734 4675 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381742 4675 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381749 4675 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381755 4675 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381763 4675 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381770 4675 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381776 4675 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381783 4675 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381788 4675 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381794 4675 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381800 4675 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.381806 4675 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.381816 4675 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.382024 4675 server.go:940] "Client rotation is on, will bootstrap in background" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.388409 4675 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.388517 4675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.390349 4675 server.go:997] "Starting client certificate rotation" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.390390 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.390663 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-29 03:53:41.563564114 +0000 UTC Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.390829 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 182h21m37.172743427s for next certificate rotation Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.522957 4675 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.525203 4675 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.593183 4675 log.go:25] "Validated CRI v1 runtime API" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.677807 4675 log.go:25] "Validated CRI v1 image API" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.681058 4675 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.687858 4675 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-21-13-27-19-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.687936 4675 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.720723 4675 manager.go:217] Machine: {Timestamp:2025-11-21 13:32:04.715408325 +0000 UTC m=+1.441823062 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8 BootID:7b9b045a-ab24-4730-a701-b9ff89571936 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:04:fb:10 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:04:fb:10 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f4:56:48 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7c:0a:29 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6b:91:fa Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:16:33:0b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:21:27:ea:ae:c2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:32:d4:dc:46:be:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.721141 4675 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.721372 4675 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.723285 4675 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.723609 4675 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.723669 4675 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.723935 4675 topology_manager.go:138] "Creating topology manager with none policy" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.723949 4675 container_manager_linux.go:303] "Creating device plugin manager" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.724526 4675 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.724572 4675 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 21 
13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.724868 4675 state_mem.go:36] "Initialized new in-memory state store" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.725523 4675 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.732448 4675 kubelet.go:418] "Attempting to sync node with API server" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.732475 4675 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.732495 4675 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.732511 4675 kubelet.go:324] "Adding apiserver pod source" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.732526 4675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.749815 4675 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.749837 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.749934 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.749901 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.750043 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.751029 4675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
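The HardEvictionThresholds in the nodeConfig dump above mix absolute quantities (memory.available under 100Mi) with percentages of capacity (nodefs.available under 10%, imagefs.available under 15%). A simplified sketch of that comparison, using capacities from the Machine and Filesystem dumps above; the types and the sample "available" numbers are illustrative, not kubelet code:

    package main

    import "fmt"

    type threshold struct {
        signal     string
        quantity   int64   // absolute limit in bytes; 0 when percentage-based
        percentage float64 // fraction of capacity; 0 when quantity-based
    }

    // exceeded reports whether the eviction signal has crossed its hard limit.
    func exceeded(t threshold, available, capacity int64) bool {
        limit := t.quantity
        if t.percentage > 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        mem := threshold{signal: "memory.available", quantity: 100 << 20}
        nodefs := threshold{signal: "nodefs.available", percentage: 0.10}
        // Capacities: 33654128640 bytes RAM, 85292941312 bytes on /var (vda4).
        fmt.Println(exceeded(mem, 8<<30, 33654128640))    // false: well above 100Mi
        fmt.Println(exceeded(nodefs, 5<<30, 85292941312)) // true: below the 10% floor
    }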
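The reflector failures just above all reduce to a single condition: nothing is accepting TCP connections on api-int.crc.testing:6443 yet, which is expected this early in boot while the kube-apiserver static pod is still coming up. A throwaway probe that reproduces exactly that check (assumes it runs on the node, where the name resolves):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
        if err != nil {
            // While the apiserver is still starting this prints
            // "connect: connection refused", matching the lines above.
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver endpoint is accepting connections")
    }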
Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.754154 4675 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755757 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755778 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755785 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755792 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755803 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755810 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755818 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755835 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755844 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755851 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755862 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.755868 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.756700 4675 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.757295 4675 server.go:1280] "Started kubelet" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.757460 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.757489 4675 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.757706 4675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.758817 4675 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 21 13:32:04 crc systemd[1]: Started Kubernetes Kubelet. 
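With the kubelet started and the podresources endpoint announced above, one quick way to confirm that gRPC server is actually listening is to dial its unix socket. A minimal sketch of such a check (a hypothetical helper, not part of the kubelet; it needs root to reach the socket):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix",
            "/var/lib/kubelet/pod-resources/kubelet.sock", time.Second)
        if err != nil {
            fmt.Println("podresources socket not ready:", err)
            return
        }
        conn.Close()
        fmt.Println("podresources socket is accepting connections")
    }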
Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.762398 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.762459 4675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.762616 4675 server.go:460] "Adding debug handlers to kubelet server" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.762713 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:11:50.480302225 +0000 UTC Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.762781 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1071h39m45.717523766s for next certificate rotation Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.763240 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.763805 4675 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.763838 4675 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.763976 4675 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.764976 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.765048 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.765657 4675 factory.go:55] Registering systemd factory Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.765689 4675 factory.go:221] Registration of the systemd container factory successfully Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.766034 4675 factory.go:153] Registering CRI-O factory Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.766146 4675 factory.go:221] Registration of the crio container factory successfully Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.766264 4675 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.766348 4675 factory.go:103] Registering Raw factory Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.766416 4675 manager.go:1196] Started watching for new ooms in manager Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.767003 4675 manager.go:319] Starting recovery of all containers Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.778245 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811307 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811546 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811637 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811707 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811792 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811865 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.811938 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812016 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812118 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812197 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812505 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812586 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812660 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812738 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812819 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812902 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.812979 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813059 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813174 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813263 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813357 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813467 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813562 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813705 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813786 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813896 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.813996 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814098 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814171 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814234 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814290 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814544 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814600 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.809553 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a08d7462c355f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 13:32:04.757263711 +0000 UTC m=+1.483678438,LastTimestamp:2025-11-21 13:32:04.757263711 +0000 UTC m=+1.483678438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814653 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814799 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814852 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814869 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814886 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814901 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814922 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814937 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" 
seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814951 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814964 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814977 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.814995 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.815010 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.815024 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.815042 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.815057 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817436 4675 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817473 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817491 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817505 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817524 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817541 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817556 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817604 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817621 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817635 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817648 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817664 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817677 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817691 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817703 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817718 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817732 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817745 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817758 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817770 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817783 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817797 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817811 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817823 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817838 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817851 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817865 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817878 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817892 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.817905 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818008 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818024 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818039 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818051 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818089 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818105 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818120 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818134 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818176 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818203 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818216 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818229 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818243 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818256 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818277 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818293 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818306 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818318 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818330 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818341 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818352 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818364 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818376 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818388 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818401 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818412 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818430 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818444 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818456 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818469 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818481 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818493 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818505 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818517 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818529 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818541 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818554 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818567 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818579 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818591 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818603 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818613 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818624 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818636 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818647 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818659 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818670 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818681 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818692 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818703 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818714 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818725 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818736 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818746 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818756 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818773 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818784 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818794 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818805 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818815 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818827 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818840 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818852 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818863 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818876 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818892 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818903 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818915 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818926 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818938 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818950 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818963 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818979 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.818993 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819004 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819013 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819023 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819034 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819045 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819058 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819089 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819103 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819113 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819125 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819133 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819143 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819153 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819166 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819178 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819190 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819203 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819214 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819226 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819238 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819251 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819262 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819274 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819285 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819295 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819309 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819321 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819331 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819370 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819383 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819395 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819405 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819415 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819429 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819438 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819446 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819455 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819463 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819473 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819483 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819495 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819507 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819519 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819531 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819545 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819560 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819575 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819587 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819600 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819611 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819623 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819635 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819647 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819657 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819669 4675 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819680 4675 reconstruct.go:97] "Volume reconstruction finished" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.819687 4675 reconciler.go:26] "Reconciler: start to sync state" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.836699 4675 manager.go:324] Recovery completed Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.845910 4675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.847417 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.847645 4675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.847681 4675 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.847699 4675 kubelet.go:2335] "Starting kubelet main sync loop" Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.847739 4675 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 21 13:32:04 crc kubenswrapper[4675]: W1121 13:32:04.848422 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.848485 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.848732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.848761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.848770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.849735 4675 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 21 13:32:04 crc kubenswrapper[4675]: I1121 13:32:04.849748 4675 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 21 13:32:04 crc 
kubenswrapper[4675]: I1121 13:32:04.849765 4675 state_mem.go:36] "Initialized new in-memory state store" Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.863604 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.948652 4675 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.963898 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:04 crc kubenswrapper[4675]: E1121 13:32:04.980138 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.065263 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.149333 4675 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.165861 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.236964 4675 policy_none.go:49] "None policy: Start" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.238822 4675 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.238867 4675 state_mem.go:35] "Initializing new in-memory state store" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.266034 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.366213 4675 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.381124 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.446377 4675 manager.go:334] "Starting Device Plugin manager" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.446441 4675 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.446455 4675 server.go:79] "Starting device plugin registration server" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.446938 4675 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.446958 4675 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.447478 4675 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.447634 4675 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) 
starts" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.447644 4675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.457810 4675 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.547446 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.549526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.549575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.549599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.549492 4675 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.549633 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.549721 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.550403 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.551197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.551328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.551440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.551714 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.552375 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.552534 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553335 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553589 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553768 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553853 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.553997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.554146 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.554255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555207 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555354 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555492 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.555544 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.556600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.556633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.556659 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.556665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.556674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.556695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.557226 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.557284 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.557652 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.558779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.558971 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.559139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.559202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.559334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.559355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.559570 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.559633 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.560724 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.560783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.560806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629551 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629607 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629645 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629679 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629708 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629733 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629756 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629779 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629803 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629890 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.629994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731584 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731718 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731790 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731890 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731946 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732001 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732026 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.731962 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732147 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732274 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732329 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732330 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732435 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732445 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732450 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732469 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732546 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732571 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732596 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732716 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.732728 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.751132 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.753190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:05 crc 
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.753274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.753331 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.753959 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.759683 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:05 crc kubenswrapper[4675]: W1121 13:32:05.795913 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.796128 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:05 crc kubenswrapper[4675]: W1121 13:32:05.847030 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.847211 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.884671 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.892992 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.910358 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.930149 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: I1121 13:32:05.941352 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 21 13:32:05 crc kubenswrapper[4675]: W1121 13:32:05.941834 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:05 crc kubenswrapper[4675]: E1121 13:32:05.942392 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:06 crc kubenswrapper[4675]: W1121 13:32:06.018132 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:06 crc kubenswrapper[4675]: E1121 13:32:06.018241 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:06 crc kubenswrapper[4675]: W1121 13:32:06.063402 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e9895a8b2e99c8cccecd25e8fe1baec9c27182e2d81a3079666e1329f2ba3a19 WatchSource:0}: Error finding container e9895a8b2e99c8cccecd25e8fe1baec9c27182e2d81a3079666e1329f2ba3a19: Status 404 returned error can't find the container with id e9895a8b2e99c8cccecd25e8fe1baec9c27182e2d81a3079666e1329f2ba3a19
Nov 21 13:32:06 crc kubenswrapper[4675]: W1121 13:32:06.066303 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-64cd53c5a2b6bab61a2de05bb50c672b6754385148bfdfc415f850883268efe3 WatchSource:0}: Error finding container 64cd53c5a2b6bab61a2de05bb50c672b6754385148bfdfc415f850883268efe3: Status 404 returned error can't find the container with id 64cd53c5a2b6bab61a2de05bb50c672b6754385148bfdfc415f850883268efe3
Nov 21 13:32:06 crc kubenswrapper[4675]: W1121 13:32:06.072355 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-21732e7f426054d4f033e441afabf449b3703d8675e7f5a443c3481de3b0a6cf WatchSource:0}: Error finding container 21732e7f426054d4f033e441afabf449b3703d8675e7f5a443c3481de3b0a6cf: Status 404 returned error can't find the container with id 21732e7f426054d4f033e441afabf449b3703d8675e7f5a443c3481de3b0a6cf
Nov 21 13:32:06 crc kubenswrapper[4675]: W1121 13:32:06.075399 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4e9961b33bae54308c91d57be92d8f59b611d1e9a7e64d9b0000139c1f5f58a2 WatchSource:0}: Error finding container 4e9961b33bae54308c91d57be92d8f59b611d1e9a7e64d9b0000139c1f5f58a2: Status 404 returned error can't find the container with id 4e9961b33bae54308c91d57be92d8f59b611d1e9a7e64d9b0000139c1f5f58a2
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.154259 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.155757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.155806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.155822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.155856 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:06 crc kubenswrapper[4675]: E1121 13:32:06.156488 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:06 crc kubenswrapper[4675]: E1121 13:32:06.182717 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.759366 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.854740 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9895a8b2e99c8cccecd25e8fe1baec9c27182e2d81a3079666e1329f2ba3a19"}
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.856177 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"34825dcf105bf3ec7c02dcba8324b85e37c6c12429b5b4f2daef9b1fa73a7b05"}
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.857450 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e9961b33bae54308c91d57be92d8f59b611d1e9a7e64d9b0000139c1f5f58a2"}
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.858748 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21732e7f426054d4f033e441afabf449b3703d8675e7f5a443c3481de3b0a6cf"}
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.860186 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"64cd53c5a2b6bab61a2de05bb50c672b6754385148bfdfc415f850883268efe3"}
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.957157 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.959680 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.959718 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.959730 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:06 crc kubenswrapper[4675]: I1121 13:32:06.959755 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:06 crc kubenswrapper[4675]: E1121 13:32:06.960267 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:07 crc kubenswrapper[4675]: W1121 13:32:07.738781 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4675]: E1121 13:32:07.738898 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:07 crc kubenswrapper[4675]: I1121 13:32:07.759611 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4675]: E1121 13:32:07.783458 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s"
Nov 21 13:32:07 crc kubenswrapper[4675]: W1121 13:32:07.800359 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4675]: E1121 13:32:07.800500 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:08 crc kubenswrapper[4675]: W1121 13:32:08.026629 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:08 crc kubenswrapper[4675]: E1121 13:32:08.026714 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:08 crc kubenswrapper[4675]: I1121 13:32:08.560684 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:08 crc kubenswrapper[4675]: I1121 13:32:08.562951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:08 crc kubenswrapper[4675]: I1121 13:32:08.562995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:08 crc kubenswrapper[4675]: I1121 13:32:08.563007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:08 crc kubenswrapper[4675]: I1121 13:32:08.563032 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:08 crc kubenswrapper[4675]: E1121 13:32:08.563800 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:08 crc kubenswrapper[4675]: I1121 13:32:08.760059 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:09 crc kubenswrapper[4675]: W1121 13:32:09.018585 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:09 crc kubenswrapper[4675]: E1121 13:32:09.018758 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.759824 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.868727 4675 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97" exitCode=0
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.868826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97"}
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.868969 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.869970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.870012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.870024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.871051 4675 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c" exitCode=0
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.871110 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c"}
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.871189 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.871800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.871822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.871833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.877119 4675 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd" exitCode=0
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.877188 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.877194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd"}
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.878195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.878256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.878276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.878800 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6" exitCode=0
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.878890 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6"}
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.878965 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.879952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.879980 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.879994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.880507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b"}
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.881822 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.882669 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.882705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:09 crc kubenswrapper[4675]: I1121 13:32:09.882723 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:10 crc kubenswrapper[4675]: I1121 13:32:10.759779 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:10 crc kubenswrapper[4675]: E1121 13:32:10.984351 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="6.4s"
Nov 21 13:32:11 crc kubenswrapper[4675]: W1121 13:32:11.301841 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:11 crc kubenswrapper[4675]: E1121 13:32:11.301947 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:11 crc kubenswrapper[4675]: W1121 13:32:11.492892 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:11 crc kubenswrapper[4675]: E1121 13:32:11.493021 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.759891 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.764935 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.766719 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.766749 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.766758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.766779 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:11 crc kubenswrapper[4675]: E1121 13:32:11.767185 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.885937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf"}
Nov 21 13:32:11 crc kubenswrapper[4675]: I1121 13:32:11.887559 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf"}
Nov 21 13:32:12 crc kubenswrapper[4675]: W1121 13:32:12.334855 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:12 crc kubenswrapper[4675]: E1121 13:32:12.334960 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:12 crc kubenswrapper[4675]: I1121 13:32:12.759284 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:12 crc kubenswrapper[4675]: I1121 13:32:12.891934 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369"}
Nov 21 13:32:12 crc kubenswrapper[4675]: I1121 13:32:12.894404 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575"}
Nov 21 13:32:12 crc kubenswrapper[4675]: I1121 13:32:12.897380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3"}
Nov 21 13:32:12 crc kubenswrapper[4675]: W1121 13:32:12.911251 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:12 crc kubenswrapper[4675]: E1121 13:32:12.911424 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:13 crc kubenswrapper[4675]: I1121 13:32:13.759698 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:13 crc kubenswrapper[4675]: I1121 13:32:13.903845 4675 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3" exitCode=0
Nov 21 13:32:13 crc kubenswrapper[4675]: I1121 13:32:13.903907 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3"}
Nov 21 13:32:14 crc kubenswrapper[4675]: E1121 13:32:14.736953 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a08d7462c355f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 13:32:04.757263711 +0000 UTC m=+1.483678438,LastTimestamp:2025-11-21 13:32:04.757263711 +0000 UTC m=+1.483678438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.758777 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.909871 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.910056 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.911920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.911968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.911977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.911929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.912025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:14 crc kubenswrapper[4675]: I1121 13:32:14.912046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:15 crc kubenswrapper[4675]: E1121 13:32:15.458370 4675 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 21 13:32:15 crc kubenswrapper[4675]: I1121 13:32:15.758943 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:15 crc kubenswrapper[4675]: I1121 13:32:15.914827 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c"}
Nov 21 13:32:15 crc kubenswrapper[4675]: I1121 13:32:15.917664 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387"}
Nov 21 13:32:16 crc kubenswrapper[4675]: I1121 13:32:16.759712 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:17 crc kubenswrapper[4675]: E1121 13:32:17.386026 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="7s"
Nov 21 13:32:17 crc kubenswrapper[4675]: I1121 13:32:17.759257 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:17 crc kubenswrapper[4675]: I1121 13:32:17.926854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78"}
Nov 21 13:32:18 crc kubenswrapper[4675]: W1121 13:32:18.092822 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:18 crc kubenswrapper[4675]: E1121 13:32:18.092946 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.167601 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.169471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.169532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.169555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.169600 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:18 crc kubenswrapper[4675]: E1121 13:32:18.170350 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.759156 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:18 crc kubenswrapper[4675]: I1121 13:32:18.935228 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6"}
Nov 21 13:32:19 crc kubenswrapper[4675]: I1121 13:32:19.759406 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:19 crc kubenswrapper[4675]: I1121 13:32:19.941014 4675 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6" exitCode=0
Nov 21 13:32:19 crc kubenswrapper[4675]: I1121 13:32:19.941104 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6"}
Nov 21 13:32:19 crc kubenswrapper[4675]: I1121 13:32:19.944584 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13"}
Nov 21 13:32:20 crc kubenswrapper[4675]: I1121 13:32:20.760272 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:20 crc kubenswrapper[4675]: I1121 13:32:20.950209 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05"}
Nov 21 13:32:20 crc kubenswrapper[4675]: I1121 13:32:20.950305 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:20 crc kubenswrapper[4675]: I1121 13:32:20.951329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:20 crc kubenswrapper[4675]: I1121 13:32:20.951358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:20 crc kubenswrapper[4675]: I1121 13:32:20.951367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.758833 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.964502 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114"}
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.964615 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.964620 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.966551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.966607 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.966632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.966565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.966704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:21 crc kubenswrapper[4675]: I1121 13:32:21.966723 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:22 crc kubenswrapper[4675]: W1121 13:32:22.407043 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:22 crc kubenswrapper[4675]: E1121 13:32:22.407183 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.504954 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.505586 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.505679 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.759780 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:22 crc kubenswrapper[4675]: W1121 13:32:22.813543 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:22 crc kubenswrapper[4675]: E1121 13:32:22.813627 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.977719 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd"}
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.977783 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.978816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.978851 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:22 crc kubenswrapper[4675]: I1121 13:32:22.978888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:23 crc kubenswrapper[4675]: W1121 13:32:23.442103 4675 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:23 crc kubenswrapper[4675]: E1121 13:32:23.442205 4675 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:23 crc kubenswrapper[4675]: I1121 13:32:23.759471 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:23 crc kubenswrapper[4675]: I1121 13:32:23.984466 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147"}
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.316465 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.316682 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.318462 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.318527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.318542 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:24 crc kubenswrapper[4675]: E1121 13:32:24.386863 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="7s"
Nov 21 13:32:24 crc kubenswrapper[4675]: E1121 13:32:24.738781 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a08d7462c355f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 13:32:04.757263711 +0000 UTC m=+1.483678438,LastTimestamp:2025-11-21 13:32:04.757263711 +0000 UTC m=+1.483678438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.759807 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:24 crc kubenswrapper[4675]: I1121 13:32:24.993546 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a"}
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.171421 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.172944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.172988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.172999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.173025 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:25 crc kubenswrapper[4675]: E1121 13:32:25.173651 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.368656 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.368892 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.369022 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body=
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.369130 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.370540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.370570 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.370608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:25 crc kubenswrapper[4675]: E1121 13:32:25.458527 4675 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 21 13:32:25 crc kubenswrapper[4675]: I1121 13:32:25.758752 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:26 crc kubenswrapper[4675]: I1121 13:32:26.759101 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:26 crc kubenswrapper[4675]: I1121 13:32:26.895724 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 21 13:32:26 crc kubenswrapper[4675]: I1121 13:32:26.896448 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:26 crc kubenswrapper[4675]: I1121 13:32:26.898701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:26 crc kubenswrapper[4675]: I1121 13:32:26.898946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:26 crc kubenswrapper[4675]: I1121 13:32:26.899175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.002980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539"}
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.007843 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2e26dceaa47efcbd1fb4b146329c90f7c97bcaab12c6eb20883da12d95032e0a"}
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.008024 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.009534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.009591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.009614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:27 crc kubenswrapper[4675]: I1121 13:32:27.759574 4675 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Nov 21 13:32:28 crc kubenswrapper[4675]: I1121 13:32:28.016693 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1"}
Nov 21 13:32:28 crc kubenswrapper[4675]: I1121 13:32:28.016784 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:28 crc kubenswrapper[4675]: I1121 13:32:28.016901 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:28 crc kubenswrapper[4675]: I1121 13:32:28.018107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:28 crc kubenswrapper[4675]: I1121 13:32:28.018181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:28 crc kubenswrapper[4675]: I1121 13:32:28.018203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.021534 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.023621 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2e26dceaa47efcbd1fb4b146329c90f7c97bcaab12c6eb20883da12d95032e0a" exitCode=255
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.023682 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2e26dceaa47efcbd1fb4b146329c90f7c97bcaab12c6eb20883da12d95032e0a"}
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.023792 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.025153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.025186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.025194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.025692 4675 scope.go:117] "RemoveContainer" containerID="2e26dceaa47efcbd1fb4b146329c90f7c97bcaab12c6eb20883da12d95032e0a"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.028498 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa"}
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.028641 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.029894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.029936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.029957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.274501 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.274742 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.276444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.276496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:29 crc kubenswrapper[4675]: I1121 13:32:29.276515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.032807 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.037593 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200"}
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.037682 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.037782 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.038920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.038918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.038984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.038998 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.038962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.039082 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:30 crc kubenswrapper[4675]: I1121 13:32:30.745766 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.014260 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.039926 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.040030 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.041285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.041348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.041360 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.576833 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.577063 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.578246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.578278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:31 crc kubenswrapper[4675]: I1121 13:32:31.578287 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.042810 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.044664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.044733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.044755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.174090 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.176123 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.176169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.176185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.176219 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.513057 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.513222 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.514103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.514139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.514154 4675 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4675]: I1121 13:32:32.517790 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.045124 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.046584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.046639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.046656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.052387 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.052568 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.053721 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.053778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4675]: I1121 13:32:33.053801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4675]: E1121 13:32:35.458655 4675 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 13:32:36 crc kubenswrapper[4675]: I1121 13:32:36.181415 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 21 13:32:36 crc kubenswrapper[4675]: I1121 13:32:36.181487 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 21 13:32:36 crc kubenswrapper[4675]: I1121 13:32:36.188569 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 21 13:32:36 crc kubenswrapper[4675]: I1121 13:32:36.188659 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 21 13:32:38 crc 
kubenswrapper[4675]: I1121 13:32:38.369666 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 13:32:38 crc kubenswrapper[4675]: I1121 13:32:38.369826 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.751819 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.752298 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.752787 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.752935 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.753939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.754017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.754154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4675]: I1121 13:32:40.760538 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.047909 4675 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.065591 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.066201 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.066279 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.067044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.067147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.067173 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.174669 4675 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.177302 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.610027 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.610320 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.611723 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.611770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.611783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.622878 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.672495 4675 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.780005 4675 apiserver.go:52] "Watching apiserver" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.841398 4675 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.841835 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.842439 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.842540 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.842961 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.842967 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.842647 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.843289 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.843300 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.843527 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.843623 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.846907 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.847161 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.847100 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.849089 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.849308 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.849585 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.849969 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.849995 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.850266 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.864709 4675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879044 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879139 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879178 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879217 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879255 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879288 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879324 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879356 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879391 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879535 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879568 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879600 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879632 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879694 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879724 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879755 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879787 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879808 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879822 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879854 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879883 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879914 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879944 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.879973 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880002 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880033 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880064 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880121 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880150 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880209 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880239 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880268 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880296 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880328 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880465 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880503 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880543 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880534 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880574 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880609 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880644 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880673 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880797 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880804 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880838 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880869 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880926 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880957 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.880984 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881004 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881025 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881032 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881046 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881049 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881147 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881177 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881197 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881214 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881237 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881234 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881324 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881344 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881361 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881401 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881420 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881442 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881463 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881487 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881507 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881542 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881560 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881578 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881598 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881614 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881632 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 
13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881654 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881674 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881693 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881713 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881734 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881761 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881778 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881798 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881817 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881835 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881860 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881884 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881902 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881920 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881937 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881954 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881992 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882010 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882026 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882046 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882072 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882129 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882153 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882176 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882197 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882219 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882264 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882284 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882306 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882327 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882350 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882372 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882394 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882417 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882439 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882488 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882509 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882551 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882599 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882621 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882694 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882762 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882786 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882811 
4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882837 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882862 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882890 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882948 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882974 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883000 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883021 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883046 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883072 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 
13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884118 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884143 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884165 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884186 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884207 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884233 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884255 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884278 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884300 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884327 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884349 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884374 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884515 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884574 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884596 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884618 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884643 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884960 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885006 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885024 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885040 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885108 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885143 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885162 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885361 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885382 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885398 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885416 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885434 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885453 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885469 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885487 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885507 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885525 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885544 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 
13:32:41.885561 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885600 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885620 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885640 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885661 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885705 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885727 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885747 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " 
Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885767 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885786 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885805 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885825 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885844 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885865 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885883 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885924 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885944 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885982 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886002 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886019 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886144 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886223 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886249 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886273 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886295 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886359 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886383 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886448 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886526 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886538 4675 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886549 4675 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886559 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886570 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886582 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886862 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886873 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886883 4675 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886895 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881632 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.904298 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881676 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.881818 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882561 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882572 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882808 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.882889 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883206 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883310 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883875 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883999 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.883992 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884359 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.884616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885478 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.885889 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886016 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886294 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886313 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.886334 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.893939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.893967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.894653 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.894733 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.894764 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.895142 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.895766 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.895992 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.896051 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.896100 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.896460 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.896473 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.897281 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.897381 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.897656 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906886 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.898513 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.898571 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906953 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.898821 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.898983 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.899128 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.899159 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.899065 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.899885 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.899998 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.900429 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.900507 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.900767 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.900862 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.900885 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.900905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.901271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.901413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.901996 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.902018 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.902183 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.899587 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.902760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.903141 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.903169 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.903798 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.903917 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.903998 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.904564 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.904931 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.904952 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.905209 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.905584 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.905600 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906297 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907267 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907319 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907320 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.902883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906573 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.902067 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907420 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906816 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907460 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.897630 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907044 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907569 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.906616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907333 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907299 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.907909 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.908561 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.908824 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.908905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.908976 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909190 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909197 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.908812 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909492 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909540 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909570 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909583 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909789 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909806 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909706 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909857 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.909882 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910366 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910476 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910509 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910952 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.911120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.911148 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.911263 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.911449 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.912614 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.912734 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.412704838 +0000 UTC m=+39.139119565 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.910815 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913057 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.913074 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913824 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.913860 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.413833517 +0000 UTC m=+39.140248454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913165 4675 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.914359 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.914527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.916658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.917741 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.918224 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.918457 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.918608 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.918618 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.918724 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913128 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913194 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913225 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913264 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913222 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913476 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.913523 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.919003 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.919229 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.919299 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.419281785 +0000 UTC m=+39.145696592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.920025 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.920153 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.920389 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.925832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.926159 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.926325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.926870 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.927019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.927914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.928286 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.928785 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.928978 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.929637 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.929797 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.930099 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.930216 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.930225 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.930546 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.930803 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.930938 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.931146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.931395 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.931922 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.931925 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.932157 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.933135 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.933473 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.935645 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.935666 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.935679 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.935742 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.435722865 +0000 UTC m=+39.162137592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.936443 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.936549 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.938508 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.938692 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.939328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.939359 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.939472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.939813 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.940343 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.941015 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.941035 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.941050 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:41 crc kubenswrapper[4675]: E1121 13:32:41.941251 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.441229595 +0000 UTC m=+39.167644332 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.942234 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.942251 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.942634 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.943134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.943205 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.944169 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.948398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.952409 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.952809 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.953763 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.954103 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.954868 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.956522 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.957782 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.959600 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.961192 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.972005 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.977613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988783 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988799 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988811 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988822 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988850 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988861 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988872 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988883 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988894 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988921 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath 
\"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988933 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988943 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988955 4675 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988966 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.988976 4675 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989007 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989017 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989027 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989038 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989049 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989060 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989096 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989108 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989118 4675 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989128 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989138 4675 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989166 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989177 4675 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989191 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989203 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989214 4675 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989242 4675 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989253 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989264 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989274 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989285 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 
13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989296 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989323 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989335 4675 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989345 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989358 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989369 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989396 4675 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989407 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989418 4675 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989429 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989440 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989452 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989480 4675 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 
13:32:41.989490 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989501 4675 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989512 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989522 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989533 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989560 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989570 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989580 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989592 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989603 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989612 4675 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989642 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989654 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989738 4675 reconciler_common.go:293] 
"Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989752 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989763 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989774 4675 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989803 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989815 4675 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989825 4675 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989836 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989846 4675 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989859 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989888 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989900 4675 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989910 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc 
kubenswrapper[4675]: I1121 13:32:41.989920 4675 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989930 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989941 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989969 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989979 4675 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989990 4675 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.989999 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990010 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990021 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990046 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990057 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990087 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990098 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990109 4675 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990120 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990131 4675 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990142 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990175 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990187 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990201 4675 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990213 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990239 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990250 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990262 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990273 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990284 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node 
\"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990296 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990326 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990337 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990348 4675 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990358 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990368 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990378 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990406 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990418 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990429 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990441 4675 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990454 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990485 4675 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990495 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990504 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990514 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990524 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990535 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990563 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990574 4675 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990584 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990595 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990607 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990633 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 
13:32:41.990644 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990656 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990667 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990678 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990688 4675 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990717 4675 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990729 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990738 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990757 4675 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990767 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990794 4675 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990806 4675 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990817 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990828 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990839 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990849 4675 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990876 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990888 4675 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990897 4675 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990908 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990918 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990928 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990956 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990966 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990977 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990986 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.990997 4675 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991021 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991032 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991042 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991053 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991062 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991102 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991112 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991122 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991131 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991141 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991151 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991149 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:41 crc kubenswrapper[4675]: 
I1121 13:32:41.991161 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991171 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991182 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991192 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991201 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991234 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991243 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991252 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991262 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991272 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991282 4675 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991291 4675 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991301 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991312 4675 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991324 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991335 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991345 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991355 4675 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991364 4675 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991374 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.991384 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:41 crc kubenswrapper[4675]: I1121 13:32:41.994325 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.016291 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.030037 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.070795 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.071298 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.073636 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200" exitCode=255 Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.073758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200"} Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.073860 4675 scope.go:117] "RemoveContainer" containerID="2e26dceaa47efcbd1fb4b146329c90f7c97bcaab12c6eb20883da12d95032e0a" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.101604 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.109882 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.109885 4675 scope.go:117] "RemoveContainer" containerID="60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.110138 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.110774 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.112987 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.124816 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.136170 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.152022 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.160760 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.162614 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.169169 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:42 crc kubenswrapper[4675]: W1121 13:32:42.175139 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f7b69db37b49c40d28c335ed5ade0d6e82014548c64726c8c408adc3b2ea8bf1 WatchSource:0}: Error finding container f7b69db37b49c40d28c335ed5ade0d6e82014548c64726c8c408adc3b2ea8bf1: Status 404 returned error can't find the container with id f7b69db37b49c40d28c335ed5ade0d6e82014548c64726c8c408adc3b2ea8bf1 Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.175670 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:42 crc kubenswrapper[4675]: W1121 13:32:42.179469 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-bd6d9dc377f7881958504f98a41f2b3d1c8f635226ac23f61c0d042f37765e18 WatchSource:0}: Error finding container bd6d9dc377f7881958504f98a41f2b3d1c8f635226ac23f61c0d042f37765e18: Status 404 returned error can't find the container with id bd6d9dc377f7881958504f98a41f2b3d1c8f635226ac23f61c0d042f37765e18 Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.180189 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 21 13:32:42 crc kubenswrapper[4675]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Nov 21 13:32:42 crc kubenswrapper[4675]: set -o allexport Nov 21 13:32:42 crc kubenswrapper[4675]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Nov 21 13:32:42 crc kubenswrapper[4675]: source /etc/kubernetes/apiserver-url.env Nov 21 13:32:42 crc kubenswrapper[4675]: else Nov 21 13:32:42 crc kubenswrapper[4675]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Nov 21 13:32:42 crc kubenswrapper[4675]: exit 1 Nov 21 13:32:42 crc kubenswrapper[4675]: fi Nov 21 13:32:42 crc kubenswrapper[4675]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Nov 21 13:32:42 crc kubenswrapper[4675]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN
_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Nov 21 13:32:42 crc kubenswrapper[4675]: > logger="UnhandledError" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.181302 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.182108 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 21 13:32:42 crc kubenswrapper[4675]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Nov 21 13:32:42 crc kubenswrapper[4675]: if [[ -f "/env/_master" ]]; then Nov 21 13:32:42 crc kubenswrapper[4675]: set -o allexport Nov 21 13:32:42 crc kubenswrapper[4675]: source "/env/_master" Nov 21 13:32:42 crc kubenswrapper[4675]: set +o allexport Nov 21 13:32:42 crc kubenswrapper[4675]: fi Nov 21 13:32:42 crc kubenswrapper[4675]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Nov 21 13:32:42 crc kubenswrapper[4675]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Nov 21 13:32:42 crc kubenswrapper[4675]: ho_enable="--enable-hybrid-overlay" Nov 21 13:32:42 crc kubenswrapper[4675]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Nov 21 13:32:42 crc kubenswrapper[4675]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Nov 21 13:32:42 crc kubenswrapper[4675]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Nov 21 13:32:42 crc kubenswrapper[4675]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Nov 21 13:32:42 crc kubenswrapper[4675]: --webhook-cert-dir="/etc/webhook-cert" \ Nov 21 13:32:42 crc kubenswrapper[4675]: --webhook-host=127.0.0.1 \ Nov 21 13:32:42 crc kubenswrapper[4675]: --webhook-port=9743 \ Nov 21 13:32:42 crc kubenswrapper[4675]: ${ho_enable} \ Nov 21 13:32:42 crc kubenswrapper[4675]: --enable-interconnect \ Nov 21 13:32:42 crc kubenswrapper[4675]: --disable-approver \ Nov 21 13:32:42 crc kubenswrapper[4675]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Nov 21 13:32:42 crc kubenswrapper[4675]: --wait-for-kubernetes-api=200s \ Nov 21 13:32:42 crc kubenswrapper[4675]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Nov 21 13:32:42 crc kubenswrapper[4675]: --loglevel="${LOGLEVEL}" Nov 21 13:32:42 crc kubenswrapper[4675]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Nov 21 13:32:42 crc kubenswrapper[4675]: > logger="UnhandledError" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.184738 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 21 13:32:42 crc kubenswrapper[4675]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Nov 21 13:32:42 crc kubenswrapper[4675]: if [[ -f "/env/_master" ]]; then Nov 21 13:32:42 crc kubenswrapper[4675]: set -o allexport Nov 21 13:32:42 crc kubenswrapper[4675]: source "/env/_master" Nov 21 13:32:42 crc kubenswrapper[4675]: set +o allexport Nov 21 13:32:42 crc kubenswrapper[4675]: fi Nov 21 13:32:42 crc kubenswrapper[4675]: Nov 21 13:32:42 crc kubenswrapper[4675]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Nov 21 13:32:42 crc kubenswrapper[4675]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Nov 21 13:32:42 crc kubenswrapper[4675]: --disable-webhook \ Nov 21 13:32:42 crc kubenswrapper[4675]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Nov 21 13:32:42 crc kubenswrapper[4675]: --loglevel="${LOGLEVEL}" Nov 21 13:32:42 crc kubenswrapper[4675]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Nov 21 13:32:42 crc kubenswrapper[4675]: > logger="UnhandledError" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.185842 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.187275 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.188793 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.496710 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.496846 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.496884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.496909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.496939 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497053 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497151 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:43.497116668 +0000 UTC m=+40.223531425 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497209 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:43.49719603 +0000 UTC m=+40.223610817 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497212 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497270 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497298 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497297 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497315 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497324 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497275 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497361 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:43.497347893 +0000 UTC m=+40.223762690 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497405 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:43.497391415 +0000 UTC m=+40.223806162 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4675]: E1121 13:32:42.497421 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:43.497413575 +0000 UTC m=+40.223828312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.853321 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.853936 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.855514 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.856313 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.857575 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.858165 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.858810 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.859805 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.860531 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.861461 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.861945 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.863154 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.863771 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.864409 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.865718 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.866551 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.867901 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.868428 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.869018 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.870185 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.870745 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.872025 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.872467 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.873551 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.874058 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.874792 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.876087 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.876552 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.877533 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.877989 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.879015 4675 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.879138 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.880864 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.881790 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.882237 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.883765 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.884394 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.885343 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.885983 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.887008 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.887528 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.888757 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.889421 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.890404 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.890859 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.891760 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.892260 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.893612 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.894156 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.894994 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.895482 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.896355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.896909 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.897537 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 21 13:32:42 crc kubenswrapper[4675]: I1121 13:32:42.898402 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.079487 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bd6d9dc377f7881958504f98a41f2b3d1c8f635226ac23f61c0d042f37765e18"} Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.081032 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 21 13:32:43 crc kubenswrapper[4675]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Nov 21 13:32:43 crc kubenswrapper[4675]: if [[ -f "/env/_master" ]]; then Nov 21 13:32:43 crc kubenswrapper[4675]: set -o allexport Nov 21 13:32:43 crc kubenswrapper[4675]: source "/env/_master" Nov 21 13:32:43 crc kubenswrapper[4675]: set +o allexport Nov 21 13:32:43 crc kubenswrapper[4675]: fi Nov 21 13:32:43 crc kubenswrapper[4675]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Nov 21 13:32:43 crc kubenswrapper[4675]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Nov 21 13:32:43 crc kubenswrapper[4675]: ho_enable="--enable-hybrid-overlay" Nov 21 13:32:43 crc kubenswrapper[4675]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Nov 21 13:32:43 crc kubenswrapper[4675]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Nov 21 13:32:43 crc kubenswrapper[4675]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Nov 21 13:32:43 crc kubenswrapper[4675]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Nov 21 13:32:43 crc kubenswrapper[4675]: --webhook-cert-dir="/etc/webhook-cert" \ Nov 21 13:32:43 crc kubenswrapper[4675]: --webhook-host=127.0.0.1 \ Nov 21 13:32:43 crc kubenswrapper[4675]: --webhook-port=9743 \ Nov 21 13:32:43 crc kubenswrapper[4675]: ${ho_enable} \ Nov 21 13:32:43 crc kubenswrapper[4675]: --enable-interconnect \ Nov 21 13:32:43 crc kubenswrapper[4675]: --disable-approver \ Nov 21 13:32:43 crc kubenswrapper[4675]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Nov 21 13:32:43 crc kubenswrapper[4675]: --wait-for-kubernetes-api=200s \ Nov 21 13:32:43 crc kubenswrapper[4675]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Nov 21 13:32:43 crc kubenswrapper[4675]: --loglevel="${LOGLEVEL}" Nov 21 13:32:43 crc kubenswrapper[4675]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Nov 21 13:32:43 crc kubenswrapper[4675]: > logger="UnhandledError" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.081053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f7b69db37b49c40d28c335ed5ade0d6e82014548c64726c8c408adc3b2ea8bf1"} Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.083233 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 21 13:32:43 crc kubenswrapper[4675]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Nov 21 13:32:43 crc kubenswrapper[4675]: set -o allexport Nov 21 13:32:43 crc kubenswrapper[4675]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Nov 21 13:32:43 crc kubenswrapper[4675]: source /etc/kubernetes/apiserver-url.env Nov 21 13:32:43 crc kubenswrapper[4675]: else Nov 21 13:32:43 crc kubenswrapper[4675]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Nov 21 13:32:43 crc kubenswrapper[4675]: exit 1 Nov 21 13:32:43 crc kubenswrapper[4675]: fi Nov 21 13:32:43 crc kubenswrapper[4675]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Nov 21 13:32:43 crc kubenswrapper[4675]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_IN
ACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Nov 21 13:32:43 crc kubenswrapper[4675]: > logger="UnhandledError" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.083385 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 21 13:32:43 crc kubenswrapper[4675]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash 
-c set -xe Nov 21 13:32:43 crc kubenswrapper[4675]: if [[ -f "/env/_master" ]]; then Nov 21 13:32:43 crc kubenswrapper[4675]: set -o allexport Nov 21 13:32:43 crc kubenswrapper[4675]: source "/env/_master" Nov 21 13:32:43 crc kubenswrapper[4675]: set +o allexport Nov 21 13:32:43 crc kubenswrapper[4675]: fi Nov 21 13:32:43 crc kubenswrapper[4675]: Nov 21 13:32:43 crc kubenswrapper[4675]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Nov 21 13:32:43 crc kubenswrapper[4675]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Nov 21 13:32:43 crc kubenswrapper[4675]: --disable-webhook \ Nov 21 13:32:43 crc kubenswrapper[4675]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Nov 21 13:32:43 crc kubenswrapper[4675]: --loglevel="${LOGLEVEL}" Nov 21 13:32:43 crc kubenswrapper[4675]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Nov 21 13:32:43 crc kubenswrapper[4675]: > logger="UnhandledError" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.083421 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.084303 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.084555 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to 
\"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.086732 4675 scope.go:117] "RemoveContainer" containerID="60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.086918 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.087516 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d2f043523fc237da4cfac63206ad3c061b2795e956b4add7e27a6aaab64ac329"} Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.088940 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.090306 4675 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.099023 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.108945 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.120051 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.129569 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.148927 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.163033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e26dceaa47efcbd1fb4b146329c90f7c97bcaab12c6eb20883da12d95032e0a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:28Z\\\",\\\"message\\\":\\\"W1121 13:32:27.247689 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1121 13:32:27.248116 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763731947 cert, and key in /tmp/serving-cert-163646942/serving-signer.crt, /tmp/serving-cert-163646942/serving-signer.key\\\\nI1121 13:32:27.754142 1 observer_polling.go:159] Starting file observer\\\\nW1121 13:32:27.757307 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:27.757528 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:27.759577 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-163646942/tls.crt::/tmp/serving-cert-163646942/tls.key\\\\\\\"\\\\nF1121 13:32:28.083628 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.177679 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.192901 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.205236 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.214359 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.221196 4675 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.224308 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.234569 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.246346 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.258891 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.270908 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.303150 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.507724 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.507820 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:32:43 crc
kubenswrapper[4675]: E1121 13:32:43.507887 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:45.507849746 +0000 UTC m=+42.234264473 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.507919 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.507956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.507989 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508014 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508105 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508054 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508173 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508189 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508259 4675 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508316 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508340 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508155 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:45.508129523 +0000 UTC m=+42.234544240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508654 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:45.508535913 +0000 UTC m=+42.234950670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.508854 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:45.508831621 +0000 UTC m=+42.235246378 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.509028 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:45.509009405 +0000 UTC m=+42.235424172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.848085 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.848088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.848247 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:43 crc kubenswrapper[4675]: I1121 13:32:43.848115 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.848377 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:43 crc kubenswrapper[4675]: E1121 13:32:43.848462 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.092185 4675 scope.go:117] "RemoveContainer" containerID="60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200" Nov 21 13:32:44 crc kubenswrapper[4675]: E1121 13:32:44.092381 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.858999 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.869133 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.878996 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.888653 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.898326 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.924124 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.933163 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:44 crc kubenswrapper[4675]: I1121 13:32:44.945446 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.403353 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.409826 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.439471 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.450919 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.473661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.506241 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.517004 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.527275 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.527324 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.527347 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.527370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527418 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:49.527389335 +0000 UTC m=+46.253804062 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.527471 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527481 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527537 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527542 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527563 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527559 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:49.527547669 +0000 UTC m=+46.253962476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527581 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527599 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:49.52758652 +0000 UTC m=+46.254001297 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527486 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527618 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:49.527607921 +0000 UTC m=+46.254022648 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527623 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527639 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.527669 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:49.527660942 +0000 UTC m=+46.254075749 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.527747 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.538512 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.548717 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.564624 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.587845 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.597950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.612669 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.626084 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.636132 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.646342 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.654925 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.665722 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.685761 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.820310 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vnxnx"] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.820755 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.823373 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.823598 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hsw5h"] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.823875 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.825452 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827044 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827203 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827248 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827349 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827473 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827662 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28jn"] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.827887 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.828007 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.828017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.828340 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bj56b"] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.828526 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.829216 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vc5gn"] Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.829319 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.829459 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.832924 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.832925 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.832924 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.832985 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833239 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833287 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833338 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833383 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833459 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833463 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833464 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.833535 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.845602 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.847920 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.847929 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.848011 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.848153 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.848267 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:45 crc kubenswrapper[4675]: E1121 13:32:45.848398 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.877314 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.903346 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.920902 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930495 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-cni-multus\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930533 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-daemon-config\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930551 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-multus-certs\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930581 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-var-lib-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930599 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-bin\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930689 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930746 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-etc-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930770 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-netd\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930817 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-os-release\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930864 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-system-cni-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-netns\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930906 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-etc-kubernetes\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930926 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-system-cni-dir\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930949 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cnibin\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" 
Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wbm\" (UniqueName: \"kubernetes.io/projected/07ef5406-1758-498a-b74d-66ffdad6f318-kube-api-access-h9wbm\") pod \"node-resolver-vc5gn\" (UID: \"07ef5406-1758-498a-b74d-66ffdad6f318\") " pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.930994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-kubelet\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931016 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r5wh\" (UniqueName: \"kubernetes.io/projected/455c5b5a-917d-4361-bcc0-9283ffce0e86-kube-api-access-9r5wh\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-netns\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931060 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-socket-dir-parent\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931102 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-cni-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931122 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-cnibin\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-node-log\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931189 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hrv\" (UniqueName: \"kubernetes.io/projected/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-kube-api-access-95hrv\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931291 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7464\" (UniqueName: \"kubernetes.io/projected/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-kube-api-access-f7464\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-hostroot\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931361 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-conf-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931393 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-kubelet\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-k8s-cni-cncf-io\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931440 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovn-node-metrics-cert\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931460 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6db74e00-d40a-442b-b5b0-4d3b28e05178-mcd-auth-proxy-config\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931478 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455c5b5a-917d-4361-bcc0-9283ffce0e86-cni-binary-copy\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931509 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/07ef5406-1758-498a-b74d-66ffdad6f318-hosts-file\") pod \"node-resolver-vc5gn\" (UID: \"07ef5406-1758-498a-b74d-66ffdad6f318\") " pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931529 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-ovn\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931546 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-log-socket\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931566 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-env-overrides\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931586 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-ovn-kubernetes\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931634 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmqs\" (UniqueName: \"kubernetes.io/projected/6db74e00-d40a-442b-b5b0-4d3b28e05178-kube-api-access-zcmqs\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931660 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-slash\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc 
kubenswrapper[4675]: I1121 13:32:45.931689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931708 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-systemd\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931727 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-script-lib\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931747 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6db74e00-d40a-442b-b5b0-4d3b28e05178-rootfs\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931768 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-cni-bin\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931840 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6db74e00-d40a-442b-b5b0-4d3b28e05178-proxy-tls\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931864 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-os-release\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931884 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-systemd-units\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931903 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: 
\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.931921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-config\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.933131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542
d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.941824 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.959596 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.971519 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.980849 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:45 crc kubenswrapper[4675]: I1121 13:32:45.994126 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.002137 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.011033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.027930 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032821 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7464\" (UniqueName: \"kubernetes.io/projected/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-kube-api-access-f7464\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-kubelet\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-hostroot\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032907 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-conf-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-k8s-cni-cncf-io\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032951 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovn-node-metrics-cert\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6db74e00-d40a-442b-b5b0-4d3b28e05178-mcd-auth-proxy-config\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.032990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455c5b5a-917d-4361-bcc0-9283ffce0e86-cni-binary-copy\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033009 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-env-overrides\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033027 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07ef5406-1758-498a-b74d-66ffdad6f318-hosts-file\") pod \"node-resolver-vc5gn\" (UID: \"07ef5406-1758-498a-b74d-66ffdad6f318\") " pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-ovn\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-k8s-cni-cncf-io\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033145 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-conf-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07ef5406-1758-498a-b74d-66ffdad6f318-hosts-file\") pod \"node-resolver-vc5gn\" (UID: \"07ef5406-1758-498a-b74d-66ffdad6f318\") " pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033190 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-ovn\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033117 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-log-socket\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-hostroot\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-log-socket\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033250 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-slash\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033271 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-ovn-kubernetes\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033297 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-ovn-kubernetes\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033317 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmqs\" (UniqueName: \"kubernetes.io/projected/6db74e00-d40a-442b-b5b0-4d3b28e05178-kube-api-access-zcmqs\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033115 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-kubelet\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033333 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-slash\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033338 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-cni-bin\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033356 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033360 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-cni-bin\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033788 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455c5b5a-917d-4361-bcc0-9283ffce0e86-cni-binary-copy\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034025 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-env-overrides\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034208 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.033774 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6db74e00-d40a-442b-b5b0-4d3b28e05178-mcd-auth-proxy-config\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-systemd\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-script-lib\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034308 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6db74e00-d40a-442b-b5b0-4d3b28e05178-rootfs\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034327 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-config\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034351 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6db74e00-d40a-442b-b5b0-4d3b28e05178-proxy-tls\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034370 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6db74e00-d40a-442b-b5b0-4d3b28e05178-rootfs\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-os-release\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-systemd-units\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034455 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034470 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-systemd\") pod \"ovnkube-node-w28jn\" 
(UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034475 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-cni-multus\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034499 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-cni-multus\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034505 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-systemd-units\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034530 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034519 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-os-release\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034534 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-var-lib-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034518 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-var-lib-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-daemon-config\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034620 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-multus-certs\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034668 
4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-bin\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-multus-certs\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034690 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034712 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-bin\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034714 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-etc-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034736 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-netd\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034765 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-netns\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-os-release\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034822 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-system-cni-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-kubelet\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034865 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-etc-kubernetes\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-system-cni-dir\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cnibin\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034939 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9wbm\" (UniqueName: \"kubernetes.io/projected/07ef5406-1758-498a-b74d-66ffdad6f318-kube-api-access-h9wbm\") pod \"node-resolver-vc5gn\" (UID: \"07ef5406-1758-498a-b74d-66ffdad6f318\") " pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034962 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-socket-dir-parent\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r5wh\" (UniqueName: \"kubernetes.io/projected/455c5b5a-917d-4361-bcc0-9283ffce0e86-kube-api-access-9r5wh\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.034993 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-system-cni-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035005 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-netns\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035008 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035018 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-script-lib\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-netns\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035092 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cnibin\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035100 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-socket-dir-parent\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035093 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-etc-openvswitch\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-netd\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-var-lib-kubelet\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035154 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-host-run-netns\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035171 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-etc-kubernetes\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 
13:32:46.035180 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-os-release\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035196 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-config\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035203 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-system-cni-dir\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035037 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95hrv\" (UniqueName: \"kubernetes.io/projected/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-kube-api-access-95hrv\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035261 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-cni-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035207 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-daemon-config\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-cnibin\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035314 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-cnibin\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-node-log\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035387 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-node-log\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035450 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455c5b5a-917d-4361-bcc0-9283ffce0e86-multus-cni-dir\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.035588 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.037208 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovn-node-metrics-cert\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.039996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6db74e00-d40a-442b-b5b0-4d3b28e05178-proxy-tls\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.051655 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.052678 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmqs\" (UniqueName: \"kubernetes.io/projected/6db74e00-d40a-442b-b5b0-4d3b28e05178-kube-api-access-zcmqs\") pod \"machine-config-daemon-vnxnx\" (UID: \"6db74e00-d40a-442b-b5b0-4d3b28e05178\") " pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.054324 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r5wh\" (UniqueName: \"kubernetes.io/projected/455c5b5a-917d-4361-bcc0-9283ffce0e86-kube-api-access-9r5wh\") pod \"multus-hsw5h\" (UID: \"455c5b5a-917d-4361-bcc0-9283ffce0e86\") " pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.055240 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9wbm\" (UniqueName: \"kubernetes.io/projected/07ef5406-1758-498a-b74d-66ffdad6f318-kube-api-access-h9wbm\") pod \"node-resolver-vc5gn\" (UID: \"07ef5406-1758-498a-b74d-66ffdad6f318\") " pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.055501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hrv\" (UniqueName: \"kubernetes.io/projected/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-kube-api-access-95hrv\") pod \"ovnkube-node-w28jn\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.059564 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7464\" (UniqueName: \"kubernetes.io/projected/ee0f125b-1d69-4a42-9d1e-14f3673a1cb8-kube-api-access-f7464\") pod \"multus-additional-cni-plugins-bj56b\" (UID: \"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\") " pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.070639 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.084178 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.093839 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.102991 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.118826 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a8743
5a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.128281 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.135388 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.136898 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.144871 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hsw5h" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.144884 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: W1121 13:32:46.146116 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6db74e00_d40a_442b_b5b0_4d3b28e05178.slice/crio-ea9967b30f0462510a94d29fae3ec0c4f46235844adcc8fd0aff488e57624f3a WatchSource:0}: Error finding container ea9967b30f0462510a94d29fae3ec0c4f46235844adcc8fd0aff488e57624f3a: Status 404 returned error can't find the container with id ea9967b30f0462510a94d29fae3ec0c4f46235844adcc8fd0aff488e57624f3a Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.152974 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:46 crc kubenswrapper[4675]: W1121 13:32:46.157822 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod455c5b5a_917d_4361_bcc0_9283ffce0e86.slice/crio-f9f6defbfb0db953779139dd86369996ac5d3eef2680287d70c51a6a615fcf06 WatchSource:0}: Error finding container f9f6defbfb0db953779139dd86369996ac5d3eef2680287d70c51a6a615fcf06: Status 404 returned error can't find the container with id f9f6defbfb0db953779139dd86369996ac5d3eef2680287d70c51a6a615fcf06 Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.158147 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.160859 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bj56b" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.167221 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:46 crc kubenswrapper[4675]: I1121 13:32:46.168298 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vc5gn" Nov 21 13:32:46 crc kubenswrapper[4675]: W1121 13:32:46.185641 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd58cf4_de2e_4357_96eb_4fdb4694ea48.slice/crio-f45a993638e679390172a556f88aeea02ede4cd2c4c11895cf8d01986096798e WatchSource:0}: Error finding container f45a993638e679390172a556f88aeea02ede4cd2c4c11895cf8d01986096798e: Status 404 returned error can't find the container with id f45a993638e679390172a556f88aeea02ede4cd2c4c11895cf8d01986096798e Nov 21 13:32:46 crc kubenswrapper[4675]: W1121 13:32:46.193646 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0f125b_1d69_4a42_9d1e_14f3673a1cb8.slice/crio-f6acca975012bacc8d7a0c331f26fa5c5f6e79b141c14a5190f7611b242fc9cd WatchSource:0}: Error finding container f6acca975012bacc8d7a0c331f26fa5c5f6e79b141c14a5190f7611b242fc9cd: Status 404 returned error can't find the container with id f6acca975012bacc8d7a0c331f26fa5c5f6e79b141c14a5190f7611b242fc9cd Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.099460 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vc5gn" event={"ID":"07ef5406-1758-498a-b74d-66ffdad6f318","Type":"ContainerStarted","Data":"8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.099504 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vc5gn" event={"ID":"07ef5406-1758-498a-b74d-66ffdad6f318","Type":"ContainerStarted","Data":"cfda512d4d5659b2c7e33652250c43ac5281334c72bff832019da2369183cc86"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.100285 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerStarted","Data":"f6acca975012bacc8d7a0c331f26fa5c5f6e79b141c14a5190f7611b242fc9cd"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.101397 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerStarted","Data":"27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.101416 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerStarted","Data":"f9f6defbfb0db953779139dd86369996ac5d3eef2680287d70c51a6a615fcf06"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.103481 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c" exitCode=0 Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.103547 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.103565 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" 
event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"f45a993638e679390172a556f88aeea02ede4cd2c4c11895cf8d01986096798e"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.104875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.104906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"ea9967b30f0462510a94d29fae3ec0c4f46235844adcc8fd0aff488e57624f3a"} Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.617222 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-77cmk"] Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.617570 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.619693 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.620112 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.620144 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.621556 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.632251 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.642733 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.649951 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b683f87-49b9-47b9-bce6-62c5df20b364-host\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.650131 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hbw\" (UniqueName: \"kubernetes.io/projected/8b683f87-49b9-47b9-bce6-62c5df20b364-kube-api-access-s8hbw\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.650174 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8b683f87-49b9-47b9-bce6-62c5df20b364-serviceca\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.659560 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.670915 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.680626 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.688907 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.697150 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.704418 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.712854 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.723756 4675 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.732298 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.740113 4675 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.745670 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.750705 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8b683f87-49b9-47b9-bce6-62c5df20b364-serviceca\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.750749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b683f87-49b9-47b9-bce6-62c5df20b364-host\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.750821 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hbw\" (UniqueName: \"kubernetes.io/projected/8b683f87-49b9-47b9-bce6-62c5df20b364-kube-api-access-s8hbw\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.750981 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8b683f87-49b9-47b9-bce6-62c5df20b364-host\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.752176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8b683f87-49b9-47b9-bce6-62c5df20b364-serviceca\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.759886 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.768324 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.774498 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hbw\" (UniqueName: \"kubernetes.io/projected/8b683f87-49b9-47b9-bce6-62c5df20b364-kube-api-access-s8hbw\") pod \"node-ca-77cmk\" (UID: \"8b683f87-49b9-47b9-bce6-62c5df20b364\") " pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.778403 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.848348 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.848394 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.848374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:47 crc kubenswrapper[4675]: E1121 13:32:47.848524 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:47 crc kubenswrapper[4675]: E1121 13:32:47.848599 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:47 crc kubenswrapper[4675]: E1121 13:32:47.848699 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:47 crc kubenswrapper[4675]: I1121 13:32:47.939747 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-77cmk" Nov 21 13:32:47 crc kubenswrapper[4675]: W1121 13:32:47.952228 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b683f87_49b9_47b9_bce6_62c5df20b364.slice/crio-a7400274613e0308b399d976c6605d5ef5e4678c088e8a5f358f2ed52e554c6b WatchSource:0}: Error finding container a7400274613e0308b399d976c6605d5ef5e4678c088e8a5f358f2ed52e554c6b: Status 404 returned error can't find the container with id a7400274613e0308b399d976c6605d5ef5e4678c088e8a5f358f2ed52e554c6b Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.110054 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.111120 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-77cmk" event={"ID":"8b683f87-49b9-47b9-bce6-62c5df20b364","Type":"ContainerStarted","Data":"a7400274613e0308b399d976c6605d5ef5e4678c088e8a5f358f2ed52e554c6b"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.112542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerStarted","Data":"70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.120355 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.136054 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.148239 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.157701 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.169105 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.177988 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.179643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.179674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.179683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.179792 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.180111 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.188688 4675 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.188933 4675 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.190178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.190216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.190226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.190246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.190255 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.191977 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.201026 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: E1121 13:32:48.202831 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.206031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.206087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.206095 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.206111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.206120 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.211595 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f24
42d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: E1121 13:32:48.213910 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b
6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.220997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.221467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.221500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.221510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.221526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: 
I1121 13:32:48.221535 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: E1121 13:32:48.229870 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.233062 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.233132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.233145 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.233161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.233173 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.236964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: E1121 13:32:48.241610 
4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.i
o/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4
643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.244396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.244432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.244442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.244457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.244470 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.246159 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.251768 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: E1121 13:32:48.252031 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: E1121 13:32:48.252207 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.253685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.253714 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.253725 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.253740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.253750 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.260089 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.268579 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.276103 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.285459 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.299287 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.310171 
4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154af
a2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.317115 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.326898 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.335098 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.351176 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a8743
5a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.355465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.355494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.355504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.355519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.355530 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.365015 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256
:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.376331 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.386544 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.394550 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.409906 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.419115 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 
13:32:48.427351 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.457412 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.457447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.457457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.457471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.457480 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.559133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.559172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.559184 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.559203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.559215 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.661755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.662100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.662113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.662131 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.662145 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.763994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.764321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.764806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.764969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.765111 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.867503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.867531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.867538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.867551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.867560 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.969764 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.969802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.969814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.969829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4675]: I1121 13:32:48.969841 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.073250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.073288 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.073296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.073310 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.073319 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.117113 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.117156 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.117166 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.118177 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-77cmk" event={"ID":"8b683f87-49b9-47b9-bce6-62c5df20b364","Type":"ContainerStarted","Data":"a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.120222 4675 generic.go:334] "Generic (PLEG): container finished" podID="ee0f125b-1d69-4a42-9d1e-14f3673a1cb8" containerID="70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087" exitCode=0 Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.120819 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerDied","Data":"70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.130732 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.138788 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.146433 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.153969 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.161105 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.176678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.176706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.176715 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.176729 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.176738 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.186305 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.196761 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.204327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 
13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.220852 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reas
on\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.232226 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.238866 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.247934 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.258114 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.266745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.276289 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.278700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.278729 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.278738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.278752 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.278761 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.292341 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4
fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.303557 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.311812 4675 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.321600 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.331787 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.340496 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.349525 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.362375 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.371886 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.381778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.381821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.381836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.381854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.381865 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.384163 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.394999 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.401021 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.418506 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.427573 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.437783 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.484358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc 
kubenswrapper[4675]: I1121 13:32:49.484400 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.484412 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.484430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.484441 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.564932 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.565044 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565121 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:57.565060037 +0000 UTC m=+54.291474804 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565141 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565181 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:57.56517072 +0000 UTC m=+54.291585447 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.565178 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.565224 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.565268 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565343 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565364 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565372 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565377 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565385 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565413 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:57.565403996 +0000 UTC m=+54.291818723 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565435 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:57.565420906 +0000 UTC m=+54.291835673 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565387 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565461 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.565502 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:57.565488938 +0000 UTC m=+54.291903695 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.586651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.586689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.586700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.586718 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.586729 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.689046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.689104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.689115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.689130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.689139 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.791677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.791730 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.791740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.791754 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.791763 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.848444 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.848482 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.848545 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.848581 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.848717 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:49 crc kubenswrapper[4675]: E1121 13:32:49.848779 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.894389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.894434 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.894443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.894457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.894466 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.996914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.996952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.996963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.996977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4675]: I1121 13:32:49.996985 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.099038 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.099100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.099112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.099128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.099140 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.124531 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerStarted","Data":"8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1"} Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.129663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4"} Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.129708 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7"} Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.129719 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3"} Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.136588 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.145915 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.161841 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.173833 
4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.181491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.191779 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.242543 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.243343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.243377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.243390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.243408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.243421 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.256709 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.266718 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.277782 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.286861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.294590 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.300481 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.308141 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.316994 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.345815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.345846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.345853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.345866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.345875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.448157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.448491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.448637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.448766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.448886 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.552609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.552875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.552985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.553095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.553186 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.655877 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.655959 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.655984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.656014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.656038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.758626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.758668 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.758680 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.758698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.758711 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.860246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.860508 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.860592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.860678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.860775 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.963300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.963339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.963348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.963364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:50 crc kubenswrapper[4675]: I1121 13:32:50.963375 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.066906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.066961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.066979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.067003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.067021 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.169704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.170597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.170627 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.170651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.170668 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.273491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.273551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.273567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.273606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.273623 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.377156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.377223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.377246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.377273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.377297 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.479996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.480035 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.480043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.480056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.480088 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.582218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.582255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.582263 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.582277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.582286 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.686284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.686662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.686884 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.687099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.687302 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.791277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.791333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.791354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.791379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.791400 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.848660 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.848727 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:32:51 crc kubenswrapper[4675]: E1121 13:32:51.848848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:32:51 crc kubenswrapper[4675]: E1121 13:32:51.849286 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.849467 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:32:51 crc kubenswrapper[4675]: E1121 13:32:51.849705 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.894057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.894114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.894123 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.894136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.894145 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.996798 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.996849 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.996865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.996889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:51 crc kubenswrapper[4675]: I1121 13:32:51.996908 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.099592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.099642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.099697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.099722 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.099739 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.138800 4675 generic.go:334] "Generic (PLEG): container finished" podID="ee0f125b-1d69-4a42-9d1e-14f3673a1cb8" containerID="8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1" exitCode=0
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.138865 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerDied","Data":"8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1"}
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.153461 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.174754 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.193365 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.204170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.204247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.204263 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.204328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.204340 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.205683 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.216505 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.234968 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401
c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.246311 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.260891 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.271757 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c
97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.282036 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.292927 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.303965 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.306944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.306970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.306979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.306992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.307001 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.317031 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.331042 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":
\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.343954 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.409769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.409807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.409816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.409831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.409842 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.513302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.513365 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.513385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.513409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.513429 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.616482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.616960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.617042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.617167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.617260 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.720248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.720570 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.720916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.721261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.721577 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.824507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.824827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.824920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.825011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.825234 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.928723 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.929014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.929141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.929234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4675]: I1121 13:32:52.929351 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.032641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.032892 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.033028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.033138 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.033200 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.136397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.136629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.136701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.136771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.136837 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.240195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.240241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.240252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.240271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.240282 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.343432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.343493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.343511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.343537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.343581 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.446600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.446699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.446716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.446739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.446756 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.550374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.550416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.550433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.550456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.550470 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.653792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.654046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.654231 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.654456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.654666 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.757677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.757717 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.757731 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.757750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.757764 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.848537 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.848599 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.848599 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:53 crc kubenswrapper[4675]: E1121 13:32:53.849143 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:53 crc kubenswrapper[4675]: E1121 13:32:53.849191 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:53 crc kubenswrapper[4675]: E1121 13:32:53.849030 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.860225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.860289 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.860312 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.860356 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.860379 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.962101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.962132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.962141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.962154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4675]: I1121 13:32:53.962162 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.065008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.065043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.065051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.065067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.065098 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.150438 4675 generic.go:334] "Generic (PLEG): container finished" podID="ee0f125b-1d69-4a42-9d1e-14f3673a1cb8" containerID="77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea" exitCode=0 Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.150497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerDied","Data":"77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.157758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.168749 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.168969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.169153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.169136 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.169290 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.169486 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.179445 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.196580 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.206482 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.214353 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.235397 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401
c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.249308 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.258272 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.273514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.273678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.273694 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.273719 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.273735 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.274998 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.287172 
4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.294103 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.304485 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.314131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.323222 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.333500 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.375800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.376025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.376137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.376237 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.376332 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.479092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.479135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.479151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.479176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.479193 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.582648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.582704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.582725 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.582753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.582773 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.685817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.685881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.685904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.685933 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.685956 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.788812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.788859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.788873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.788893 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.788909 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.849483 4675 scope.go:117] "RemoveContainer" containerID="60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.867677 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.883057 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.891988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.892060 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.892114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.892148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.892177 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.901224 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.912414 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.994172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.994210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.994222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.994238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:54 crc kubenswrapper[4675]: I1121 13:32:54.994250 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:54Z","lastTransitionTime":"2025-11-21T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.001751 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.014900 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.024452 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.036252 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.046803 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.054357 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.064424 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.074302 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.089022 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.096315 
4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.096345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.096353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.096368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.096377 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.098089 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.105288 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.163060 4675 generic.go:334] "Generic (PLEG): container finished" podID="ee0f125b-1d69-4a42-9d1e-14f3673a1cb8" containerID="391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1" exitCode=0 Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.163104 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerDied","Data":"391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1"} Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.174985 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.198623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.198656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.198664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.198679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.198689 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.206658 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.224484 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.234143 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.246156 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.284109 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.296935 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.301740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.301775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.301786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.301803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.301814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.309028 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.321110 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.330221 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.343543 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.353512 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.359917 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.367844 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.375181 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.404378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.404420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.404432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.404451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.404466 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.506655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.506697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.506708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.506724 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.506735 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.610913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.611109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.611145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.611229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.611258 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.714964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.715387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.715404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.715424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.715436 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.818360 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.818443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.818461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.818485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.818503 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.848980 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:32:55 crc kubenswrapper[4675]: E1121 13:32:55.849194 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.849309 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:32:55 crc kubenswrapper[4675]: E1121 13:32:55.849430 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.849308 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:32:55 crc kubenswrapper[4675]: E1121 13:32:55.849804 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.920999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.921032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.921043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.921059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:55 crc kubenswrapper[4675]: I1121 13:32:55.921086 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:55Z","lastTransitionTime":"2025-11-21T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.023757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.023809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.023826 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.023850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.023867 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.126318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.126367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.126378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.126396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.126408 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.169020 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.171784 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.228903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.229344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.229365 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.229379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.229389 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.333415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.333485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.333498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.333518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.333560 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.437482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.437511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.437519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.437532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.437540 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.540800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.540925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.541019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.542442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.542496 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.645460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.645500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.645511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.645529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.645543 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.748612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.748682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.748704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.748733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.748751 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.851462 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.851485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.851493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.851506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.851515 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.900053 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.909312 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.913509 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.921920 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.948187 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401
c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.954972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.955022 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.955031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.955048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.955059 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.958648 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.967634 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.975290 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.983994 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:56 crc kubenswrapper[4675]: I1121 13:32:56.994304 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"
},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.002911 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-cont
roller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.011420 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.033294 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.050048 
4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.057903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.057948 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.057964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.057984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.057999 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.060042 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.071161 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.079727 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.160629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.160668 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.160678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.160694 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.160706 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.187845 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerStarted","Data":"1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.192090 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.193669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.194432 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.204010 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.214246 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.233901 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2
e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.244817 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.254431 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.263180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.263218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.263227 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.263243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.263256 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.264724 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.273573 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.281016 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.288850 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.296520 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.306322 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\
\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.314808 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.323492 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.342052 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.359229 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.365582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.365820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.366017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.366254 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.366594 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.371642 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.470689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.470768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 
13:32:57.470792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.470822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.470845 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.573513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.573551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.573560 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.573576 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.573606 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.656429 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.656538 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.656588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.656612 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.656634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656752 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656819 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656836 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656850 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656871 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: 
E1121 13:32:57.656752 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:13.656712875 +0000 UTC m=+70.383127642 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656923 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:13.65690608 +0000 UTC m=+70.383320817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656771 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656944 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656954 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656947 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:13.6569308 +0000 UTC m=+70.383345567 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.656994 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:13.656985732 +0000 UTC m=+70.383400469 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.657009 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:13.657002322 +0000 UTC m=+70.383417069 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.676687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.676717 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.676726 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.676742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.676752 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.779273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.779310 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.779320 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.779338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.779349 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.848779 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.848855 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.848957 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.848971 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.849166 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:57 crc kubenswrapper[4675]: E1121 13:32:57.849267 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.882005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.882093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.882105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.882122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.882136 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.983954 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.984006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.984020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.984039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4675]: I1121 13:32:57.984051 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.086645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.086675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.086684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.086697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.086709 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.100850 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf"] Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.101356 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.103864 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.104024 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.123787 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.135952 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.146857 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.158247 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.161792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pftq9\" (UniqueName: \"kubernetes.io/projected/21162055-1a92-4e7b-9717-ce6462331212-kube-api-access-pftq9\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.161873 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21162055-1a92-4e7b-9717-ce6462331212-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.161909 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21162055-1a92-4e7b-9717-ce6462331212-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.161976 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21162055-1a92-4e7b-9717-ce6462331212-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.172923 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.185136 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.189311 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.189372 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.189397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.189429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.189451 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.195475 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.199477 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.201039 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.202308 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.202369 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.202465 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.213129 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-soc
ket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.228816 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.241221 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.241494 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.249966 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.258213 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.263351 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21162055-1a92-4e7b-9717-ce6462331212-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.263393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21162055-1a92-4e7b-9717-ce6462331212-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.263482 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21162055-1a92-4e7b-9717-ce6462331212-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.263599 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pftq9\" (UniqueName: \"kubernetes.io/projected/21162055-1a92-4e7b-9717-ce6462331212-kube-api-access-pftq9\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.264605 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21162055-1a92-4e7b-9717-ce6462331212-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.265618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21162055-1a92-4e7b-9717-ce6462331212-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.268416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21162055-1a92-4e7b-9717-ce6462331212-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.277204 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.289846 
4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.293853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.293902 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.293915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.293932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.293944 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.298694 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.299707 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pftq9\" (UniqueName: \"kubernetes.io/projected/21162055-1a92-4e7b-9717-ce6462331212-kube-api-access-pftq9\") pod \"ovnkube-control-plane-749d76644c-rkqqf\" (UID: \"21162055-1a92-4e7b-9717-ce6462331212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.307410 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.314454 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.324147 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.340788 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.350747 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.361630 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.370953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.370988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.370997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.371013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.371023 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.371196 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.380914 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: E1121 13:32:58.381127 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.384752 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.384776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.384784 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.384797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.384806 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.392156 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: E1121 13:32:58.395528 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.399662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.399722 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.399731 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.399749 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.399759 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.402301 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: E1121 13:32:58.411527 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.412850 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.416286 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.416860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.416909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.416926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.416953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.416975 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.424623 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\"
:\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.432974 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: E1121 13:32:58.434887 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.440460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.440509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.440526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.440550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.440567 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.443318 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: E1121 13:32:58.452437 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: E1121 13:32:58.452658 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.454545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.454581 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.454593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.454613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.454625 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.459095 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.475997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f816
8262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.491176 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.498038 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.507908 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.518048 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.557872 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.557921 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.557932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.557953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.557965 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.660348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.660387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.660399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.660415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.660426 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.763755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.763799 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.763814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.763830 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.763841 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.867204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.867289 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.867312 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.867342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.867362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.970940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.970990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.971002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.971019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4675]: I1121 13:32:58.971032 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.074017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.074144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.074167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.074195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.074214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.163395 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-djn7k"] Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.163791 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.163852 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.176732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.176776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.176791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.176809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.176686 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.176822 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.188763 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.205866 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" event={"ID":"21162055-1a92-4e7b-9717-ce6462331212","Type":"ContainerStarted","Data":"62171852d321626d8ab696668dc5e6e4b265231ff80b85c0cd2e7189fdca93f7"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.212394 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.236587 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.252635 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.269031 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.271480 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sblr\" (UniqueName: \"kubernetes.io/projected/3034a641-e8c3-4303-bb0e-1da29de3a41b-kube-api-access-6sblr\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.271542 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.280008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.280044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.280055 4675 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.280098 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.280122 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.292353 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.311428 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.326145 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.348663 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.363670 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.373442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.373575 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.373717 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" 
failed. No retries permitted until 2025-11-21 13:32:59.8737019 +0000 UTC m=+56.600116637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.373929 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sblr\" (UniqueName: \"kubernetes.io/projected/3034a641-e8c3-4303-bb0e-1da29de3a41b-kube-api-access-6sblr\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.377924 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.383323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.383371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.383389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.383420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.383437 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.393032 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.395199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sblr\" (UniqueName: \"kubernetes.io/projected/3034a641-e8c3-4303-bb0e-1da29de3a41b-kube-api-access-6sblr\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.409238 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.428905 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.446013 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.463353 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.475717 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.485879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.485907 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.485918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.485934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.485944 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.589271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.589347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.589366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.589391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.589410 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.691876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.691923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.691933 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.691953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.691965 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.795803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.795873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.795894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.795920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.795942 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.848960 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.849091 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.848982 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.849275 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.849379 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.849472 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.881844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.882003 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:59 crc kubenswrapper[4675]: E1121 13:32:59.882119 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:00.882094211 +0000 UTC m=+57.608508968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.898968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.899026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.899043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.899094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4675]: I1121 13:32:59.899113 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.002258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.002319 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.002340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.002365 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.002383 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.105669 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.105742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.105765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.105791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.105812 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.209104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.209165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.209185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.209213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.209231 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.311982 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.312021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.312036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.312051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.312062 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.414977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.415049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.415109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.415138 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.415162 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.519189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.519250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.519309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.519343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.519364 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.623034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.623126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.623144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.623170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.623245 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.725517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.725556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.725567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.725587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.725599 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.827976 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.828050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.828138 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.828174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.828210 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.848732 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:00 crc kubenswrapper[4675]: E1121 13:33:00.848954 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.893432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:00 crc kubenswrapper[4675]: E1121 13:33:00.893582 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:00 crc kubenswrapper[4675]: E1121 13:33:00.893635 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:02.89361795 +0000 UTC m=+59.620032677 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.931534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.931590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.931606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.931630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4675]: I1121 13:33:00.931647 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.033667 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.033902 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.033911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.033924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.033932 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.137304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.137331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.137355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.137369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.137383 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.216815 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" event={"ID":"21162055-1a92-4e7b-9717-ce6462331212","Type":"ContainerStarted","Data":"d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.240394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.240455 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.240484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.240514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.240537 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.343597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.343696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.343714 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.343740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.343758 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.446930 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.446984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.447002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.447033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.447056 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.550251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.550319 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.550338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.550366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.550384 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.653374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.653437 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.653453 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.653476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.653493 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.755641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.755923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.756010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.756117 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.756264 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.847948 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.848031 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.848111 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:01 crc kubenswrapper[4675]: E1121 13:33:01.848099 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:01 crc kubenswrapper[4675]: E1121 13:33:01.848260 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:01 crc kubenswrapper[4675]: E1121 13:33:01.848375 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.862483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.862529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.862540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.862559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.862570 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.964742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.964789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.964801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.964819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4675]: I1121 13:33:01.964830 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.068454 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.068490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.068505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.068526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.068540 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.173266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.173322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.173342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.173367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.173386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.227977 4675 generic.go:334] "Generic (PLEG): container finished" podID="ee0f125b-1d69-4a42-9d1e-14f3673a1cb8" containerID="1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03" exitCode=0 Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.228048 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerDied","Data":"1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.234012 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.247714 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.260690 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-
var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.289481 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.299911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.299955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.299973 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.299997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.300016 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.317900 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.344672 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.361495 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o
://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.373532 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63
134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy
\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.382305 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.390703 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.401601 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.403586 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.403630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.403642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.403661 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.403673 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.413409 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.427648 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.450770 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.465684 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.478135 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.490091 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.506308 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.506497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.506521 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.506530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.506544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.506554 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.516645 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.609136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.609200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.609217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.609242 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.609259 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.712901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.713002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.713026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.713093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.713112 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.815158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.815213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.815225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.815243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.815256 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.848918 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:02 crc kubenswrapper[4675]: E1121 13:33:02.849116 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.915631 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:02 crc kubenswrapper[4675]: E1121 13:33:02.915773 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:02 crc kubenswrapper[4675]: E1121 13:33:02.915825 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:06.915809236 +0000 UTC m=+63.642223963 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.920219 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.920250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.920258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.920273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4675]: I1121 13:33:02.920282 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.022734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.022762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.022770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.022784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.022792 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.125473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.125507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.125515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.125529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.125538 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.227766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.227821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.227837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.227855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.227866 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.249235 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.267091 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.297700 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPa
th\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"na
me\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 
13:33:03.318896 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.330574 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.330646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.330668 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.330697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.330719 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.337046 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.355334 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.375654 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.390020 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.415292 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.428364 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.432382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.432612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.432705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.432790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.432879 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.442520 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.455109 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.466364 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.477938 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.492462 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.502821 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.518164 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.532320 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.534718 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.534752 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.534762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.534776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.534786 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.637023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.637122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.637148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.637184 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.637208 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.739692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.739725 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.739733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.739747 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.739755 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.842586 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.842623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.842635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.842652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.842663 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.848383 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:03 crc kubenswrapper[4675]: E1121 13:33:03.848566 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.848434 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:03 crc kubenswrapper[4675]: E1121 13:33:03.848798 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.848429 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:03 crc kubenswrapper[4675]: E1121 13:33:03.849027 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.944474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.944823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.945000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.945245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:03 crc kubenswrapper[4675]: I1121 13:33:03.945438 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.047837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.047881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.047891 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.047909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.047921 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.150939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.151202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.151285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.151361 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.151445 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.241850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" event={"ID":"21162055-1a92-4e7b-9717-ce6462331212","Type":"ContainerStarted","Data":"d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.245410 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerStarted","Data":"6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.253407 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.253462 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.253478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.253496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.253511 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.356445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.356484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.356492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.356506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.356515 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.459228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.459271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.459282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.459299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.459311 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.562615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.562665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.562677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.562696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.562712 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.665645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.665729 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.665744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.665770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.665787 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.772109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.772411 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.772506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.772565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.772597 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.848560 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k"
Nov 21 13:33:04 crc kubenswrapper[4675]: E1121 13:33:04.848760 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.879043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.879120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.879141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.879167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.879185 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.880095 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.912514 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z"
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.932994 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.946348 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.962874 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.977599 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.981115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.981147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.981157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.981171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.981180 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:04 crc kubenswrapper[4675]: I1121 13:33:04.991189 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:04Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.003210 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.042426 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.059848 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.074000 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.083641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.083683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.083692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.083706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.083715 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.086470 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.098274 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.111589 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.126375 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.143728 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.160688 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.185713 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.185745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.185757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.185782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.185795 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.278153 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.288709 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.288738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.288748 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.288764 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.288773 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.295491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.318315 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.336441 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.347749 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.365859 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.377282 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.391427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.391469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.391477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.391493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.391506 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.398592 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f816
8262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.413180 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.424914 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.439263 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.452254 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.464195 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.475334 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.494787 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.494842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.494855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.494875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.494887 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.495429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
1T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2
974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.508299 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.521964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.535166 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.597324 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.597374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.597385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.597401 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.597413 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.700515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.700566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.700575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.700592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.700600 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.803058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.803111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.803121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.803153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.803162 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.848400 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.848465 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:05 crc kubenswrapper[4675]: E1121 13:33:05.848550 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:05 crc kubenswrapper[4675]: E1121 13:33:05.848670 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.849117 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:05 crc kubenswrapper[4675]: E1121 13:33:05.849196 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.905588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.905640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.905653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.905673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4675]: I1121 13:33:05.905685 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.008536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.008596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.008621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.008652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.008673 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.111771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.111814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.111829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.111850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.111864 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.213854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.213879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.213887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.213901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.213909 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.254966 4675 generic.go:334] "Generic (PLEG): container finished" podID="ee0f125b-1d69-4a42-9d1e-14f3673a1cb8" containerID="6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3" exitCode=0 Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.255005 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerDied","Data":"6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.267399 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.288890 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.304329 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.315951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.316019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.316031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.316048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.316060 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.316148 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.329804 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.345961 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.362420 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.376193 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.391123 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.404385 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.417962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.418011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.418023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.418043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.418057 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.418625 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.433852 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.454025 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f816
8262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.469142 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.481115 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.492311 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.507528 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.521310 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.521355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.521368 4675 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.521395 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.521410 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.522399 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.624366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.624400 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.624408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.624423 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.624434 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.727543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.727612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.727636 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.727665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.727687 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.830526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.830607 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.830629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.830664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.830689 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.849405 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:06 crc kubenswrapper[4675]: E1121 13:33:06.849568 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.933096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.933141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.933151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.933166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.933207 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4675]: I1121 13:33:06.961952 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:06 crc kubenswrapper[4675]: E1121 13:33:06.962157 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:06 crc kubenswrapper[4675]: E1121 13:33:06.962230 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:14.962213569 +0000 UTC m=+71.688628296 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.035216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.035480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.035688 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.035864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.035962 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.139227 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.139294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.139313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.139344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.139362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.242879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.242917 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.242929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.242946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.242958 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.262933 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" event={"ID":"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8","Type":"ContainerStarted","Data":"d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.278434 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.288326 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.306437 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.318819 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.329431 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.339922 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.346147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.346190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.346200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.346214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.346224 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.349847 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.358997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.369726 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.379490 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.387883 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.397980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.410508 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.427312 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f816
8262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.446006 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.449177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.449225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc 
kubenswrapper[4675]: I1121 13:33:07.449234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.449249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.449259 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.457381 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 
21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.469121 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.486921 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:07Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.551768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.551805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.551815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.551830 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.551839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.653922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.653958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.653966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.653979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.653988 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.755749 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.755793 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.755801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.755817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.755828 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.848428 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.848497 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.848428 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:07 crc kubenswrapper[4675]: E1121 13:33:07.848559 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:07 crc kubenswrapper[4675]: E1121 13:33:07.848690 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:07 crc kubenswrapper[4675]: E1121 13:33:07.848855 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.858285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.858328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.858338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.858354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.858368 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.960410 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.960448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.960456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.960471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4675]: I1121 13:33:07.960481 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.062879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.062924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.062935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.062952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.062965 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.164661 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.164703 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.164712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.164725 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.164734 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.266460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.266502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.266514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.266527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.266539 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.368787 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.369151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.369160 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.369176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.369184 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.471398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.471443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.471456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.471479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.471494 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.506701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.506744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.506755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.506773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.506785 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.523227 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:08Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.526679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.526705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.526716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.526734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.526745 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.539728 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:08Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.542914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.542949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
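Two failures are interleaved in this stretch of the log: the kubelet holds the node NotReady because there is no CNI configuration file in /etc/kubernetes/cni/net.d/, and every node-status patch is rejected because the node.network-node-identity.openshift.io webhook presents an expired serving certificate. A minimal sketch for pulling the certificate errors out of a saved excerpt; the kubelet.log file name and the journalctl export are assumptions, not anything the log itself prescribes:

    import re

    # A minimal sketch: scan a saved journal excerpt for the x509 validity
    # errors shown above. "kubelet.log" is an assumed file name for something
    # like `journalctl -u kubelet --no-pager > kubelet.log` run on the node.
    CERT_ERR = re.compile(r'current time (\S+) is after (\S+?)"')

    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            m = CERT_ERR.search(line)
            if m:
                print(f"node clock {m.group(1)} vs cert notAfter {m.group(2)}")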
event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.542959 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.542975 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.542983 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.553709 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:08Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.559501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.559556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
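The patch attempt at 13:33:08.523227 is retried at .539728, .553709, .571003 and .586099, five tries within about 60 ms, which matches the kubelet retrying a node-status update nodeStatusUpdateRetry (five by default) times per sync before giving up. A rough confirmation against the same assumed export:

    import re
    from collections import Counter

    # Count failed status-patch attempts per wall-clock second; bursts of
    # five are consistent with the kubelet retrying a node status update
    # nodeStatusUpdateRetry (default 5) times before giving up.
    STAMP = re.compile(r"E\d{4} (\d{2}:\d{2}:\d{2})\.\d+ \d+ kubelet_node_status")

    bursts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:  # same assumed export
        for line in fh:
            if "Error updating node status" in line and (m := STAMP.search(line)):
                bursts[m.group(1)] += 1

    for second, attempts in sorted(bursts.items()):
        print(second, attempts)  # expect: 13:33:08 5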
event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.559567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.559590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.559629 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.571003 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:08Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.574629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.574671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
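Because the error names both the endpoint (https://127.0.0.1:9743/node) and the x509 failure, the expiry can be checked directly against the listener from the node itself. A sketch using Python's ssl module with the third-party cryptography package; treat it as illustrative rather than part of any OpenShift tooling:

    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    # Fetch the webhook's serving certificate straight from the listener the
    # error names (run on the node; 127.0.0.1:9743 comes from the log).
    # get_server_certificate() skips chain verification, so it still works
    # when the certificate is already expired.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    not_after = cert.not_valid_after_utc  # cryptography >= 42; older: not_valid_after
    now = datetime.now(timezone.utc)
    print(f"notAfter={not_after:%Y-%m-%dT%H:%M:%SZ} expired={now > not_after}")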
event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.574682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.574698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.574710 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.586099 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:08Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.586254 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.588037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.588113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.588138 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.588179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.588192 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.691249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.691308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.691325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.691344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.691355 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.794417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.794480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.794504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.794539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.794562 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.848471 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:08 crc kubenswrapper[4675]: E1121 13:33:08.848654 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.897437 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.897497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.897517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.897544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4675]: I1121 13:33:08.897570 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.001831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.001882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.001894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.001914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.001942 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.105873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.105927 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.105939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.105957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.105970 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.208189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.208234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.208252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.208278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.208296 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.271957 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/0.log" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.275942 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1" exitCode=1 Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.275996 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.276825 4675 scope.go:117] "RemoveContainer" containerID="094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.300467 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.312063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.312209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.312225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.312252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.312268 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.327649 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.345510 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.361934 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-confi
g\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.376532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.390196 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.413803 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f816
8262b9b9474d629fa90e0ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:09Z\\\",\\\"message\\\":\\\" 1\\\\nI1121 13:33:08.636801 5865 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:08.636807 5865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:08.636765 5865 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:33:08.637011 5865 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637180 5865 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637440 5865 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637771 5865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637817 5865 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637890 5865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.638245 5865 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.416148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.416220 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.416233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.416255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.416267 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.432108 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.445610 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.459347 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.473132 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.488744 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.501578 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.520284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.520323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.520335 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.520351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.520362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.531974 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
1T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2
974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.548352 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.561510 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.574524 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.587286 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.622796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.622870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.622944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.622979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.623001 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.725670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.725705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.725716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.725734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.725747 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.827714 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.827753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.827762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.827777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.827787 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.848778 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.848838 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:09 crc kubenswrapper[4675]: E1121 13:33:09.848876 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.848791 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:09 crc kubenswrapper[4675]: E1121 13:33:09.848962 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:09 crc kubenswrapper[4675]: E1121 13:33:09.849034 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.930827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.930875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.930885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.930902 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4675]: I1121 13:33:09.930915 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.034271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.034343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.034358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.034380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.034399 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.152615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.152682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.152707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.152739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.152761 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.255780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.255833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.255849 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.255872 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.255890 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.283599 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/0.log" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.287262 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.288369 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.311101 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.328424 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.345797 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c
5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.359216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.359256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.359266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.359282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.359295 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.363607 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.388534 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d
8a7682c3f07c958fa6404cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:09Z\\\",\\\"message\\\":\\\" 1\\\\nI1121 13:33:08.636801 5865 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:08.636807 5865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:08.636765 5865 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:33:08.637011 5865 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637180 5865 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637440 5865 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637771 5865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637817 5865 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637890 5865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.638245 5865 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.406914 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.418284 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.429747 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.444041 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.458583 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.461466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.461491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.461500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.461516 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.461527 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.474551 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.485685 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.498632 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.511287 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.522883 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.539469 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.550899 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.564944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.564983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.564994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.565024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.565034 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.572439 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
1T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2
974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.667596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.667665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.667685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.667709 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.667727 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.770194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.770240 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.770255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.770273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.770286 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.848930 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:10 crc kubenswrapper[4675]: E1121 13:33:10.849264 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.871950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.872001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.872012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.872028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.872039 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.974867 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.974931 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.974950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.974980 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4675]: I1121 13:33:10.975003 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.077641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.077698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.077714 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.077735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.077752 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.180401 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.180451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.180465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.180491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.180504 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.282932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.282973 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.282982 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.282999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.283008 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.386233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.386286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.386296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.386316 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.386332 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.489319 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.489362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.489370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.489392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.489406 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.591925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.591976 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.591985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.592008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.592022 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.695031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.695089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.695100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.695117 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.695127 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.797854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.797899 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.797937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.797955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.797970 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.848058 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.848157 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:11 crc kubenswrapper[4675]: E1121 13:33:11.848232 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.848054 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:11 crc kubenswrapper[4675]: E1121 13:33:11.848421 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:11 crc kubenswrapper[4675]: E1121 13:33:11.848501 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.901350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.901393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.901405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.901422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4675]: I1121 13:33:11.901433 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.003329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.003371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.003379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.003393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.003403 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.105731 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.105767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.105779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.105794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.105805 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.208467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.208535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.208555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.208583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.208607 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.224954 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.241459 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.252674 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.266364 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.279284 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.296320 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.309088 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.313590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.313644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.313655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.313673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.313685 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.333101 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:09Z\\\",\\\"message\\\":\\\" 1\\\\nI1121 13:33:08.636801 5865 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:08.636807 5865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:08.636765 5865 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:33:08.637011 5865 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637180 5865 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637440 5865 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637771 5865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637817 5865 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637890 5865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.638245 5865 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.347266 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.357360 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.366642 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.378566 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.389334 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.402124 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.415564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.415593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.415603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.415619 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.415631 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.423525 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
1T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2
974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.438146 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.453068 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.465759 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.476612 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:12Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.518304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.518348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.518356 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.518373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.518386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.620873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.620917 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.620928 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.620947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.620958 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.726484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.726552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.726564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.726583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.726593 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.828639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.829033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.829048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.829069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.829116 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.849277 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:12 crc kubenswrapper[4675]: E1121 13:33:12.849402 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.931371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.931426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.931442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.931470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4675]: I1121 13:33:12.931489 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.034492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.034532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.034545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.034561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.034574 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.138118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.138159 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.138170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.138189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.138202 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.241204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.241258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.241275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.241298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.241315 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.298401 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/1.log" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.299083 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/0.log" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.301657 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4" exitCode=1 Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.301709 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.301756 4675 scope.go:117] "RemoveContainer" containerID="094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.302462 4675 scope.go:117] "RemoveContainer" containerID="f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.302606 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.316679 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.332520 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d
8a7682c3f07c958fa6404cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:09Z\\\",\\\"message\\\":\\\" 1\\\\nI1121 13:33:08.636801 5865 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:08.636807 5865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:08.636765 5865 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:33:08.637011 5865 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637180 5865 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637440 5865 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637771 5865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637817 5865 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637890 5865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.638245 5865 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 
ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.343152 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.343200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.343214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.343233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.343245 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.343921 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.353708 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.362432 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.371435 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.381078 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.391737 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.411469 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.424445 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.435312 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.446000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.446098 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.446111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.446129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.446185 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.447150 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.460635 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.472913 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.484353 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.496723 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.506714 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.517198 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:13Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.548892 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.548925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.548969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.548993 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.549003 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.651054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.651122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.651142 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.651164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.651179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.731722 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.731828 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.731881 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.731917 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:45.731895648 +0000 UTC m=+102.458310375 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.731946 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.731955 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.731974 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.731999 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:45.73198955 +0000 UTC m=+102.458404277 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732011 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732092 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732102 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732110 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732126 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732130 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732139 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732115 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:45.732094083 +0000 UTC m=+102.458508870 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732177 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:45.732167465 +0000 UTC m=+102.458582192 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.732193 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:45.732183205 +0000 UTC m=+102.458597932 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.754135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.754175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.754185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.754203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.754214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.848865 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.848934 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.848880 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.849175 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.849264 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:13 crc kubenswrapper[4675]: E1121 13:33:13.849340 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.856565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.856613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.856624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.856641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.856653 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.959275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.959330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.959338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.959352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4675]: I1121 13:33:13.959361 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.061392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.061432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.061443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.061460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.061469 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.163415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.163444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.163454 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.163469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.163478 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.265457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.265503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.265511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.265525 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.265536 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.307070 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/1.log" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.368557 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.368599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.368611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.368629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.368642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.471437 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.471479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.471491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.471506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.471515 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.574354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.574390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.574399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.574416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.574428 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.676517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.676557 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.676569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.676585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.676597 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.779487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.779577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.779593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.779610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.779619 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.848256 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:14 crc kubenswrapper[4675]: E1121 13:33:14.848435 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.860949 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.871034 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.881811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.881840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.881848 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.881864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.881872 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.884437 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f24
42d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.895631 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.914486 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d
8a7682c3f07c958fa6404cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://094508f55f17f6e58b7972dcfd12c6aff0c8f8168262b9b9474d629fa90e0ef1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:09Z\\\",\\\"message\\\":\\\" 1\\\\nI1121 13:33:08.636801 5865 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:08.636807 5865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:08.636765 5865 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:33:08.637011 5865 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637180 5865 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637440 5865 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637771 5865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.637817 5865 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:33:08.637890 5865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:33:08.638245 5865 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 
ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.928920 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.941249 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.952315 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.967039 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.978525 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.985169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.985214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.985226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.985244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.985262 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4675]: I1121 13:33:14.987946 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.006715 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa37
23269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.019462 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.030534 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.041107 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.052008 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.054326 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:15 crc kubenswrapper[4675]: E1121 13:33:15.054463 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:15 crc kubenswrapper[4675]: E1121 13:33:15.054515 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:31.054500924 +0000 UTC m=+87.780915651 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.063806 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.075964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.087597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.087629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.087638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.087653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.087662 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.189742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.189801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.189811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.189825 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.189837 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.292647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.292688 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.292697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.292712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.292722 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.395358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.395392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.395399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.395413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.395422 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.497548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.497587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.497598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.497614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.497626 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.600135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.600177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.600190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.600214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.600226 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.702833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.702879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.702893 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.702909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.702919 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.805164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.805210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.805222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.805239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.805252 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.848980 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.849047 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:15 crc kubenswrapper[4675]: E1121 13:33:15.849151 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:15 crc kubenswrapper[4675]: E1121 13:33:15.849303 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.849569 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:15 crc kubenswrapper[4675]: E1121 13:33:15.849649 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.907364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.907408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.907420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.907438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4675]: I1121 13:33:15.907451 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.009584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.009629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.009646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.009663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.009672 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.111507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.111541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.111550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.111563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.111572 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.213683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.213721 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.213735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.213751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.213762 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.315195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.315228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.315238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.315253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.315265 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.417853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.417907 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.417919 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.417938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.417956 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.519965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.520005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.520017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.520031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.520041 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.622335 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.622370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.622378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.622390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.622398 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.724560 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.724596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.724611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.724626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.724634 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.827267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.827304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.827315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.827332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.827343 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.848818 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:16 crc kubenswrapper[4675]: E1121 13:33:16.848960 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.930100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.930148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.930159 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.930174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4675]: I1121 13:33:16.930186 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.033196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.033230 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.033242 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.033257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.033269 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.136239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.136283 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.136293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.136310 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.136323 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.238646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.238689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.238700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.238716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.238727 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.340433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.340462 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.340471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.340485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.340494 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.443339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.443385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.443396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.443428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.443439 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.545368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.545415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.545427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.545444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.545456 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.647517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.647551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.647568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.647583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.647592 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.749208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.749237 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.749244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.749258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.749265 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.848433 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:17 crc kubenswrapper[4675]: E1121 13:33:17.848571 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.848780 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:17 crc kubenswrapper[4675]: E1121 13:33:17.848848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.848991 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:17 crc kubenswrapper[4675]: E1121 13:33:17.849057 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.851702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.851746 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.851758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.851776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.851788 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.954683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.954716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.954727 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.954743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4675]: I1121 13:33:17.954755 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.056785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.056819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.056830 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.056846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.056856 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.159240 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.159279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.159287 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.159326 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.159336 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.261495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.261540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.261549 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.261564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.261575 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.364910 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.364950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.364962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.364982 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.364994 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.466711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.466779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.466797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.466821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.466838 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.569225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.569292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.569303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.569321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.569337 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.637414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.637456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.637469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.637489 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.637499 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.648629 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:18Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.652251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.652276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.652286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.652301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.652312 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.668908 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:18Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.674799 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.674825 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.674833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.674864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.674873 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.687541 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:18Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.691150 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.691176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.691186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.691202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.691212 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.702870 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:18Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.707235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.707301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.707323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.707359 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.707384 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.723249 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:18Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.723452 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.725067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.725104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.725112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.725128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.725140 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.827432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.827464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.827472 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.827486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.827495 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.848981 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:18 crc kubenswrapper[4675]: E1121 13:33:18.849134 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.930921 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.930964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.930976 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.930992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4675]: I1121 13:33:18.931004 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.033568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.033619 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.033635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.033655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.033670 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.135390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.135417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.135425 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.135438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.135446 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.237573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.237612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.237621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.237637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.237646 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.340447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.340490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.340502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.340518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.340531 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.442588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.442638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.442650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.442666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.442680 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.546338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.546376 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.546385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.546398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.546406 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.648111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.648148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.648158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.648174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.648187 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.750870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.750930 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.750946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.750968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.750983 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.848527 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.848637 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.848537 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:19 crc kubenswrapper[4675]: E1121 13:33:19.848688 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:19 crc kubenswrapper[4675]: E1121 13:33:19.848793 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:19 crc kubenswrapper[4675]: E1121 13:33:19.848861 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.853414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.853446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.853457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.853473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.853484 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.955504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.955550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.955561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.955577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4675]: I1121 13:33:19.955591 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.059054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.059178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.059201 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.059232 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.059255 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.161448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.161472 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.161480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.161496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.161504 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.263417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.263456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.263466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.263483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.263495 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.366491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.366519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.366527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.366541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.366549 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.468951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.468991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.469002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.469018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.469030 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.571144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.571207 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.571216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.571233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.571243 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.674049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.674113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.674123 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.674137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.674146 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.775856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.775896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.775908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.775931 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.775945 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.849176 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:20 crc kubenswrapper[4675]: E1121 13:33:20.849340 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.877887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.877935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.877946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.877961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.877972 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.981026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.981153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.981164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.981185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4675]: I1121 13:33:20.981198 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.083454 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.083515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.083524 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.083542 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.083552 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.186495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.186546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.186562 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.186598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.186618 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.289496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.289534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.289544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.289560 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.289568 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.394806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.395596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.395635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.395656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.395671 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.498163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.498225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.498235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.498259 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.498272 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.600974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.601029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.601039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.601054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.601079 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.703569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.703625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.703641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.703666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.703684 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.805862 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.805914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.805944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.805965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.805979 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.848899 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.848957 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:21 crc kubenswrapper[4675]: E1121 13:33:21.849027 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:21 crc kubenswrapper[4675]: E1121 13:33:21.849169 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.848899 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:21 crc kubenswrapper[4675]: E1121 13:33:21.849276 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.909006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.909042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.909055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.909090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4675]: I1121 13:33:21.909103 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.011717 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.011758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.011769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.011783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.011791 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.115451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.115485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.115495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.115534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.115547 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.218926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.218967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.218979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.218995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.219004 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.321744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.321792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.321803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.321820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.321832 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.424646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.424686 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.424696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.424713 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.424722 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.527141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.527174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.527182 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.527196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.527204 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.629495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.629529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.629555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.629569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.629577 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.732031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.732110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.732121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.732135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.732144 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.834833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.834878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.834891 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.834908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.834922 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.848159 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:22 crc kubenswrapper[4675]: E1121 13:33:22.848432 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.937220 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.937258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.937284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.937299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4675]: I1121 13:33:22.937309 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.040200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.040236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.040245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.040260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.040269 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.142702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.142743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.142753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.142768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.142777 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.244857 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.244894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.244912 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.244928 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.244940 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.348623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.348664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.348672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.348692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.348701 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.451396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.451450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.451466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.451493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.451510 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.554855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.554904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.554915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.554930 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.554940 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.658028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.658114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.658128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.658148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.658185 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.760789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.760871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.760885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.760901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.760910 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.848055 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:23 crc kubenswrapper[4675]: E1121 13:33:23.848292 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.848652 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:23 crc kubenswrapper[4675]: E1121 13:33:23.848781 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.850241 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:23 crc kubenswrapper[4675]: E1121 13:33:23.850342 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.853371 4675 scope.go:117] "RemoveContainer" containerID="f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.864208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.864238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.864246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.864261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.864271 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.864572 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97a
a8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.878362 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.893649 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.905492 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.916920 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.929541 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.946373 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d
8a7682c3f07c958fa6404cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.960279 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.965845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.965875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.965883 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.965903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.965922 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.971912 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.981146 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4675]: I1121 13:33:23.991346 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.002419 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.019808 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.032547 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.042994 4675 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.054033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.065450 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.068585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.068624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.068635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.068671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.068684 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.076250 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.171504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.171555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.171567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.171587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.171600 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.273808 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.273863 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.273881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.273906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.273924 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.338025 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/1.log" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.340554 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.341160 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.356117 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e5213
3bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.366938 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.375777 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.375997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.376129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.376262 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.376392 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.386662 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.397971 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.407470 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.417712 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.429086 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.449536 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.469554 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.479780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.479861 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc 
kubenswrapper[4675]: I1121 13:33:24.479891 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.479926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.479952 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.483579 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 
21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.500048 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.518874 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.536854 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.550504 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.583757 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.584991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.585019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.585030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.585044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.585081 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.599829 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.613502 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.623509 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.687408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.687436 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.687444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.687457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.687467 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.790372 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.790448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.790472 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.790501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.790522 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.848421 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:24 crc kubenswrapper[4675]: E1121 13:33:24.848583 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.871832 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.893785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.893829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.893847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.893870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.893889 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.904054 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.930377 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.943562 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.952298 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.961450 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.971844 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.981361 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.990994 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.995743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.995786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.995796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.995812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4675]: I1121 13:33:24.995822 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.009164 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
1T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2
974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.021423 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.033178 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.050486 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.062877 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.073662 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.083120 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.094173 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.097341 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.097370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.097382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.097398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.097407 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.106417 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:25Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.200163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.200446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.200456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.200469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.200478 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.303354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.303391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.303402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.303419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.303431 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.405428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.405456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.405465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.405483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.405494 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.508800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.508859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.508881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.508910 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.508931 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.611801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.611829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.611854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.611878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.611891 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.714261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.714303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.714312 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.714326 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.714334 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.816882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.816925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.816936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.816952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.816963 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.848902 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.848946 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:25 crc kubenswrapper[4675]: E1121 13:33:25.849050 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.849149 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:25 crc kubenswrapper[4675]: E1121 13:33:25.849312 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:25 crc kubenswrapper[4675]: E1121 13:33:25.849352 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.923800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.923854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.923871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.923895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4675]: I1121 13:33:25.923912 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.026341 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.026383 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.026391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.026405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.026414 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.129339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.129414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.129436 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.129465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.129523 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.231915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.231981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.231996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.232017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.232031 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.335807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.335853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.335866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.335887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.335899 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.349752 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/2.log" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.351025 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/1.log" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.355165 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb" exitCode=1 Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.355222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.355280 4675 scope.go:117] "RemoveContainer" containerID="f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.356054 4675 scope.go:117] "RemoveContainer" containerID="5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb" Nov 21 13:33:26 crc kubenswrapper[4675]: E1121 13:33:26.356209 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.380061 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.401942 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.418535 4675 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.435483 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.438662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.438726 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.438743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.438768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.438785 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.450797 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.460990 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.472768 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.485570 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.498915 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.511721 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.527433 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.541622 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.541678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.541696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.541721 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.541738 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.543153 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.564563 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.579714 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.591921 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.604580 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.621423 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.636213 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.644173 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.644206 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.644215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.644231 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.644241 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.747246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.747303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.747321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.747344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.747362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.848211 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:26 crc kubenswrapper[4675]: E1121 13:33:26.848347 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.850131 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.850191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.850207 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.850227 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.850239 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.953116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.953161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.953170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.953183 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4675]: I1121 13:33:26.953194 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.055232 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.055280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.055292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.055311 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.055325 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.158134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.158532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.158554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.158577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.158595 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.260522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.260564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.260575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.260591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.260602 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.363120 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/2.log" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.363450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.363478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.363486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.363501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.363510 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.466097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.466282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.466298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.466315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.466327 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.569656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.569707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.569719 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.569738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.569750 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.672515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.672547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.672556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.672569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.672579 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.774667 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.774713 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.774724 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.774741 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.774752 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.848876 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.849052 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:27 crc kubenswrapper[4675]: E1121 13:33:27.849253 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.849353 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:27 crc kubenswrapper[4675]: E1121 13:33:27.849511 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:27 crc kubenswrapper[4675]: E1121 13:33:27.849618 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.877522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.877580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.877594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.877615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.877633 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.980598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.980651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.980670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.980695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4675]: I1121 13:33:27.980713 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.083811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.083860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.083872 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.083892 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.083905 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.187375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.187442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.187466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.187494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.187514 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.290584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.290644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.290660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.290682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.290697 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.393855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.393906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.393925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.393970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.394007 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.496952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.497215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.497249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.497350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.497429 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.599948 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.600006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.600023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.600047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.600091 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.702881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.702948 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.702970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.702999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.703023 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.806576 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.806665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.806684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.806716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.806744 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.848228 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:28 crc kubenswrapper[4675]: E1121 13:33:28.848411 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.909151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.909193 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.909204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.909222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.909233 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.941485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.941536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.941547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.941563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.941577 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: E1121 13:33:28.953249 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.957594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.957651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.957666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.957690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.957705 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: E1121 13:33:28.969203 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.973650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.973715 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.973732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.973759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.973779 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4675]: E1121 13:33:28.991545 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.995529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.995570 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.995588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.995609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4675]: I1121 13:33:28.995627 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: E1121 13:33:29.008862 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.012887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.012924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.012933 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.012949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.012958 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: E1121 13:33:29.023000 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:29 crc kubenswrapper[4675]: E1121 13:33:29.023146 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.024593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.024650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.024667 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.024690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.024709 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.127703 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.127755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.127773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.127797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.127814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.231286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.231326 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.231334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.231351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.231361 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.334209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.334245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.334256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.334272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.334283 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.437911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.437958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.437971 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.437988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.438001 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.540663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.540707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.540722 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.540743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.540758 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.644221 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.644582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.644632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.644666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.644689 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.748629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.748715 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.748725 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.748743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.748755 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.848350 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.848496 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:29 crc kubenswrapper[4675]: E1121 13:33:29.848536 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.848617 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:29 crc kubenswrapper[4675]: E1121 13:33:29.848790 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:29 crc kubenswrapper[4675]: E1121 13:33:29.848982 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.851449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.851516 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.851528 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.851546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.851561 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.955648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.955798 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.955828 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.955855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4675]: I1121 13:33:29.955912 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.059257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.059309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.059327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.059353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.059373 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.162363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.162393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.162402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.162416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.162425 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.264981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.265032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.265048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.265096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.265113 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.367665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.367740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.367760 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.367789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.367809 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.470147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.470266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.470285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.470309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.470326 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.576056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.576195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.576206 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.576224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.576241 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.679715 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.679750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.679761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.679775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.679785 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.782241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.782299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.782308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.782324 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.782333 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.848088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:30 crc kubenswrapper[4675]: E1121 13:33:30.848254 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.885701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.885760 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.885778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.885800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.885816 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.988000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.988053 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.988097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.988123 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4675]: I1121 13:33:30.988141 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.090266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.090331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.090349 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.090373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.090390 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.130947 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:31 crc kubenswrapper[4675]: E1121 13:33:31.131148 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:31 crc kubenswrapper[4675]: E1121 13:33:31.131240 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:03.131217452 +0000 UTC m=+119.857632189 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.193031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.193089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.193100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.193115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.193126 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.296004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.296056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.296090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.296109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.296121 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.399789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.399888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.399906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.399932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.399949 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.502345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.502426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.502448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.502483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.502507 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.605924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.605970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.605984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.606003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.606041 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.708917 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.708957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.708972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.708993 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.709009 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.811369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.811432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.811446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.811467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.811481 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.848762 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.848829 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:31 crc kubenswrapper[4675]: E1121 13:33:31.848887 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.848838 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:31 crc kubenswrapper[4675]: E1121 13:33:31.849018 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:31 crc kubenswrapper[4675]: E1121 13:33:31.849124 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.914453 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.914496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.914506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.914551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4675]: I1121 13:33:31.914563 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.017583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.017705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.017718 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.017735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.017772 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.119929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.119985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.120000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.120024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.120043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.221977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.222021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.222047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.222094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.222112 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.324251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.324289 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.324297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.324312 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.324322 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.427745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.427796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.427806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.427823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.427835 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.531461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.531541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.531566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.531595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.531615 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.634871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.634931 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.634945 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.634962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.634974 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.737440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.737484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.737493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.737510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.737523 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.840448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.840510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.840530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.840556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.840573 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.848050 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:32 crc kubenswrapper[4675]: E1121 13:33:32.848301 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.944541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.944609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.944633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.944665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4675]: I1121 13:33:32.944690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.047130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.047177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.047188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.047204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.047214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.149899 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.149936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.149945 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.149960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.149971 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.252344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.252385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.252396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.252414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.252425 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.354875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.354919 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.354931 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.354948 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.354960 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.457084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.457135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.457145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.457160 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.457171 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.559733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.559776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.559786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.559802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.559812 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.662513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.662584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.662609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.662639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.662662 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.765252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.765301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.765314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.765332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.765345 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.848430 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.848505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:33 crc kubenswrapper[4675]: E1121 13:33:33.848598 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:33 crc kubenswrapper[4675]: E1121 13:33:33.848692 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.849104 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:33 crc kubenswrapper[4675]: E1121 13:33:33.849309 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.867630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.867677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.867693 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.867713 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.867724 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.970557 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.970590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.970598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.970615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4675]: I1121 13:33:33.970624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.072328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.072378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.072390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.072407 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.072420 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.175004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.175301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.175393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.175488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.175578 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.277477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.277524 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.277539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.277580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.277604 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.379823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.379885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.379918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.379937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.379947 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.482494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.482555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.482580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.482614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.482636 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.585381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.585435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.585444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.585461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.585471 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.688282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.688346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.688369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.688399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.688421 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.791797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.791845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.791857 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.791875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.791885 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.848204 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:34 crc kubenswrapper[4675]: E1121 13:33:34.848354 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.861354 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.874497 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.888403 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.894697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.894744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.894757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.894776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.894788 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.898916 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.912639 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.928978 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.946845 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6cd66612468bc44aea9f9c81613cd4a804d5d0d8a7682c3f07c958fa6404cc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:13Z\\\",\\\"message\\\":\\\"g/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI1121 13:33:11.573711 6238 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 1.068156ms\\\\nI1121 13:33:11.573942 6238 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1121 13:33:11.573984 6238 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1121 13:33:11.574022 6238 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1121 13:33:11.574181 6238 factory.go:1336] Added *v1.Node event handler 7\\\\nI1121 13:33:11.574229 6238 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1121 13:33:11.574640 6238 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:11.574741 6238 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:11.574798 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:11.574851 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:11.574955 6238 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.959426 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.969431 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.979724 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.991962 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.997452 4675 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.997485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.997518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.997535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4675]: I1121 13:33:34.997546 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.005417 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.048475 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.068762 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.081950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.093367 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.100026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.100084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.100095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.100112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.100123 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.103584 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.113023 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.202057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.202122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.202135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.202151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.202164 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.303875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.303916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.303925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.303941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.303949 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at 13:33:35.406, .509, .612, .715 and .817, identical apart from timestamps ...]
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.848705 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.848759 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:35 crc kubenswrapper[4675]: I1121 13:33:35.848816 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:35 crc kubenswrapper[4675]: E1121 13:33:35.848899 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:33:35 crc kubenswrapper[4675]: E1121 13:33:35.849020 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:35 crc kubenswrapper[4675]: E1121 13:33:35.849234 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the five-entry not-ready cycle repeats roughly every 100 ms from 13:33:35.919 through 13:33:36.844, identical apart from timestamps ...]
Nov 21 13:33:36 crc kubenswrapper[4675]: I1121 13:33:36.848753 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k"
Nov 21 13:33:36 crc kubenswrapper[4675]: E1121 13:33:36.848912 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b"
[... the cycle continues at 13:33:36.947 and roughly every 100 ms through 13:33:37.767; at 13:33:37.848 the "No sandbox for pod can be found" / "Error syncing pod, skipping" sequence recurs for network-check-source-55646444c4-trplf, networking-console-plugin-85b44fc459-gdk6g and network-check-target-xd92c, with the same CNI error and the same pod UIDs as at 13:33:35.848; the cycle then continues at 13:33:37.871 and 13:33:37.974 ...]
[... the cycle repeats from 13:33:38.077 through 13:33:39.210; at 13:33:38.847 the sandbox/sync-error pair recurs once more for network-metrics-daemon-djn7k ...]
Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.248146 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.248234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.248246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.248272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.248288 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.263706 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.268710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.268747 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.268759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.268779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.268792 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.285016 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.290471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.290531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.290549 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.290582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.290597 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.303056 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.307205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.307248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
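Every one of the retries above fails the same way: the kubelet's status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, roughly three months before this node's clock time of 2025-11-21T13:33:39Z. That pattern is typical of a CRC/OpenShift Local VM resumed long after the certificates baked into its image were minted. A quick way to confirm from the node is to pull the certificate straight off the endpoint; the following is a minimal sketch, assuming Python 3 with the third-party cryptography package on the host (variable names and output formatting are illustrative):

```python
# Minimal sketch (not part of the log): dump the validity window of the
# certificate served at the webhook endpoint named in the error above.
import ssl
from datetime import datetime

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the webhook error

# get_server_certificate() performs no chain verification, so it succeeds
# against an expired certificate, unlike the kubelet's verifying handshake.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before)
print("not after: ", cert.not_valid_after)
print("expired:   ", datetime.utcnow() > cert.not_valid_after)
```

For this node it would report a not-after of 2025-08-24 17:21:41 and expired: True, matching the x509 error string; recovering a CRC instance in this state generally means letting the cluster renew its certificates or recreating the VM (crc delete / crc start).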
event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.307261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.307280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.307295 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.322359 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.327614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.327666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.327676 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.327699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.327712 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.340771 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.340931 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.342780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
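The kubelet_node_status.go:572 entry above marks the point where the kubelet stops retrying: it gives up after a fixed budget of consecutive patch failures (five attempts in this capture: the four retries at 13:33:39.285016, .303056, .322359 and .340771, plus the attempt logged immediately before them) and the whole cycle restarts on the next status-update tick, which is why the same burst keeps repeating while the certificate stays expired. When triaging a longer journal export, a small tally like the sketch below makes that rhythm obvious (kubelet.log is a hypothetical file name, e.g. produced with journalctl -u kubelet > kubelet.log):

```python
# Minimal sketch: tally "Error updating node status, will retry" entries in a
# saved journal export to visualize the retry budget per second.
import re
from collections import Counter

retry_re = re.compile(
    r'E(\d{4} \d{2}:\d{2}:\d{2})\.\d+ \d+ kubelet_node_status\.go:585\] '
    r'"Error updating node status, will retry"'
)

attempts = Counter()  # keyed by the klog MMDD HH:MM:SS timestamp
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        m = retry_re.search(line)
        if m:
            attempts[m.group(1)] += 1

for stamp, count in sorted(attempts.items()):
    print(stamp, count)  # e.g. '1121 13:33:39 5' for the cycle above
```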
event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.342819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.342833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.342854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.342871 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.445822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.445905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.446002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.446148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.446233 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.549631 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.549690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.549712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.549746 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.549765 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.653495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.653546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.653558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.653576 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.653594 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.755929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.756008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.756028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.756050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.756063 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.847991 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.848195 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.848025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.848006 4675 util.go:30] "No sandbox for pod can be found. 
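The webhook failure explains why the status patches bounce, but the NotReady condition itself has a second, independent symptom that these entries keep repeating: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ because the ovnkube-controller container is stuck in CrashLoopBackOff (see the pod_workers.go entry just below), so the network plugin never publishes its config. The runtime's readiness probe amounts to checking that directory for a config file; a minimal sketch of the equivalent check, assuming it runs on the node itself (accepted extensions follow the usual libcni conventions):

```python
# Minimal sketch: the readiness check the runtime keeps failing, i.e.
# "is there any CNI config yet?". Path comes straight from the log.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

try:
    confs = sorted(
        f for f in os.listdir(CNI_CONF_DIR)
        if f.endswith((".conf", ".conflist", ".json"))
    )
except FileNotFoundError:
    confs = []

if confs:
    print("CNI config present:", confs)
else:
    # Mirrors the NetworkReady=false condition repeated through this log:
    # ovnkube has not (re)written its config, so the network stays not ready.
    print(f"no CNI configuration file in {CNI_CONF_DIR}/")
```

Until ovnkube-node stays up long enough to write its config there, NetworkReady stays false and the "No sandbox for pod can be found" churn below continues; given that every TLS verification in this capture fails the same way, the expired certificates are the first thing to rule out as the root cause of the crash loop as well.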
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.849435 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.849531 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.849633 4675 scope.go:117] "RemoveContainer" containerID="5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb" Nov 21 13:33:39 crc kubenswrapper[4675]: E1121 13:33:39.850434 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.859336 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.859371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.859381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.859396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.859409 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.867783 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.880604 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.897786 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.910000 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.923428 4675 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.936487 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.953470 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.961549 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.961594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.961606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.961624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.961634 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.967044 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.976345 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f
35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:39 crc kubenswrapper[4675]: I1121 13:33:39.987409 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:39.999816 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.010207 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.022551 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.034707 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.050092 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432
938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.061663 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.063473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.063497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.063505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.063519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.063527 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.070037 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.079680 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.166210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.166290 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.166315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.166359 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.166385 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.269242 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.269309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.269318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.269340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.269352 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.372670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.372711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.372721 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.372738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.372749 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.410710 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/0.log" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.410779 4675 generic.go:334] "Generic (PLEG): container finished" podID="455c5b5a-917d-4361-bcc0-9283ffce0e86" containerID="27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9" exitCode=1 Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.410813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerDied","Data":"27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.411291 4675 scope.go:117] "RemoveContainer" containerID="27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.423038 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.439431 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.452838 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.475194 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.475421 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.477179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.477208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.477234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.477250 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.497931 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.515462 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.530142 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.542483 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.557715 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.570782 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.582519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.582586 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.582604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.582633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.582652 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.596398 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
1T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2
974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.614647 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.627871 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.641339 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.654151 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.666893 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.678547 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.685478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.685535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.685558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.685586 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.685608 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.691610 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.788119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.788165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.788175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.788193 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.788202 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.848933 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:40 crc kubenswrapper[4675]: E1121 13:33:40.849178 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.864283 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.890697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.890733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.890741 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.890756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.890765 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.993225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.993265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.993276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.993293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4675]: I1121 13:33:40.993305 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.095682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.095954 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.096039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.096153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.096216 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.199030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.199084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.199096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.199112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.199122 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.301700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.301738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.301769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.301785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.301797 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.404151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.404210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.404219 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.404235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.404245 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.417323 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/0.log" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.417443 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerStarted","Data":"73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.437603 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.454386 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.469878 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.500934 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.507742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.507807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.507823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.507843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.507857 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.517009 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.535808 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.549835 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.571401 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.587950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.608783 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.610597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.610633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.610644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.610660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.610672 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.622002 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-
11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.637516 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.648488 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.661006 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.683214 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432
938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.703146 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.712710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.712750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.712951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.712968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.712981 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.715459 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.734863 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.750813 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.815615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.815667 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.815678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.815697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.815711 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.849022 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.849022 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:41 crc kubenswrapper[4675]: E1121 13:33:41.849204 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:41 crc kubenswrapper[4675]: E1121 13:33:41.849353 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.849030 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:41 crc kubenswrapper[4675]: E1121 13:33:41.849591 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.918846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.918911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.918930 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.918952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4675]: I1121 13:33:41.918973 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.021967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.022023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.022046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.022116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.022140 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.125153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.125187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.125196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.125209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.125217 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.228141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.228202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.228223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.228253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.228275 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.331133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.331201 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.331225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.331256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.331279 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.434371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.434459 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.434484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.434515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.434535 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.536615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.536702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.536739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.536776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.536797 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.640485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.640588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.640612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.640644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.640665 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.742775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.742807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.742817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.742830 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.742839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.845843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.845903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.845918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.845941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.845957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.848292 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:42 crc kubenswrapper[4675]: E1121 13:33:42.848459 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.948426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.948475 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.948487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.948505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4675]: I1121 13:33:42.948517 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.051118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.051165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.051176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.051191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.051203 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.154205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.154271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.154288 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.154312 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.154329 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.256578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.256661 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.256695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.256725 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.256747 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.359621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.359700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.359722 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.359745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.359800 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.462751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.462804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.462821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.462860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.462878 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.565778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.565831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.565849 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.565877 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.565900 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.670116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.670200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.670232 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.670258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.670279 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.774341 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.774402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.774419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.774447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.774465 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.848213 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.848318 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:43 crc kubenswrapper[4675]: E1121 13:33:43.848385 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.848213 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:43 crc kubenswrapper[4675]: E1121 13:33:43.848508 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:43 crc kubenswrapper[4675]: E1121 13:33:43.849248 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.877627 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.877710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.877734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.877765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.877791 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.980968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.981051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.981062 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.981104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4675]: I1121 13:33:43.981117 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.084412 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.084468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.084487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.084526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.084544 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.187567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.187648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.187673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.187706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.187731 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.290647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.290695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.290710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.290731 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.290746 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.393141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.393203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.393219 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.393246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.393262 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.496700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.497020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.497037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.497057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.497086 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.599591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.599635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.599645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.599662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.599672 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.702418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.702454 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.702466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.702483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.702495 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.805942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.806021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.806047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.806176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.806261 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.848395 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:44 crc kubenswrapper[4675]: E1121 13:33:44.849314 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.864334 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.876179 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.889436 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.899990 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.910293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.910337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.910348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.910363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.910373 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.916617 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.929771 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.949384 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432
938df3a28de9b25c5f55f5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.963027 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.976028 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4675]: I1121 13:33:44.987363 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.001424 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.012050 
4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.012096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.012104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.012118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.012128 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.014031 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.023577 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.040651 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exi
tCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.052660 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.063945 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.073833 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.084603 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.094719 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.114696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.114732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.114743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.114759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.114769 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.217726 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.217777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.217789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.217807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.217820 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.320453 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.320502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.320519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.320542 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.320558 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.423029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.423079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.423090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.423106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.423118 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.525404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.525482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.525506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.525536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.525559 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.628140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.628186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.628199 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.628217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.628232 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.730603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.730646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.730656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.730671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.730680 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.801498 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.801681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.801747 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:49.801706257 +0000 UTC m=+166.528120984 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.801781 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.801839 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.801857 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:49.80183538 +0000 UTC m=+166.528250267 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.801892 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.801957 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802004 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802030 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802142 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802178 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802203 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802218 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802216 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802191 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:49.802180169 +0000 UTC m=+166.528594906 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802314 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:49.802295412 +0000 UTC m=+166.528710279 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.802339 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:49.802326342 +0000 UTC m=+166.528741289 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.833212 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.833265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.833278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.833300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.833313 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.848511 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.848596 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.848677 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.848624 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.848780 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:45 crc kubenswrapper[4675]: E1121 13:33:45.848867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.935968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.936026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.936037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.936060 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4675]: I1121 13:33:45.936101 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.037866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.037901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.037909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.037925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.037935 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.141174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.141235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.141244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.141264 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.141276 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.243984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.244042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.244056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.244085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.244097 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.347410 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.347452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.347465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.347485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.347498 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.450573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.450643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.450656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.450673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.450687 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.554144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.554201 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.554215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.554237 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.554253 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.656734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.656772 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.656787 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.656804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.656815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.759214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.759247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.759255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.759268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.759276 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.848609 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:46 crc kubenswrapper[4675]: E1121 13:33:46.848763 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.861653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.861734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.861760 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.861776 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.861784 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.965241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.965292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.965303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.965321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4675]: I1121 13:33:46.965336 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.068428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.068529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.068561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.068597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.068624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.171484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.171530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.171539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.171553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.171561 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.274194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.274270 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.274287 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.274308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.274352 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.376490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.376540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.376550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.376567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.376581 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.479974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.480006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.480016 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.480033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.480044 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.583754 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.583823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.583844 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.583880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.583914 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.686927 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.686992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.687010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.687039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.687057 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.790862 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.790953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.790978 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.791012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.791034 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.848447 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.848506 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.848604 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:47 crc kubenswrapper[4675]: E1121 13:33:47.848672 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:47 crc kubenswrapper[4675]: E1121 13:33:47.848828 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:47 crc kubenswrapper[4675]: E1121 13:33:47.848989 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.893696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.893737 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.893751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.893768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.893779 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.997278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.997331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.997348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.997372 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4675]: I1121 13:33:47.997389 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.100587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.100652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.100671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.100692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.100705 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.203465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.203510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.203522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.203540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.203551 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.307170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.307259 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.307285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.307318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.307341 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.409706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.409796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.409815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.409837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.409853 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.512473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.512531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.512547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.512569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.512585 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.615030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.615092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.615110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.615126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.615137 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.718189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.718248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.718272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.718300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.718363 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.820916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.820970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.820981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.820999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.821012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.848891 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:48 crc kubenswrapper[4675]: E1121 13:33:48.849052 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.924256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.924294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.924302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.924315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4675]: I1121 13:33:48.924324 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.027332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.027758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.027789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.027820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.027840 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.130457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.130522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.130544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.130572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.130593 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.232585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.232622 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.232630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.232644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.232652 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.335190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.335226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.335234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.335248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.335256 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.382467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.382534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.382550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.382570 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.382585 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.401297 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:49Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.405136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.405188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.405206 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.405230 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.405247 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.422644 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:49Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.426617 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.426656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.426671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.426695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.426713 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.444662 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:49Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.450121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.450164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.450177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.450196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.450211 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.467458 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:49Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.471591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.471632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.471646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.471664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.471676 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.493449 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:49Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.493717 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.495645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.495690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.495703 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.495722 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.495735 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.599293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.599334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.599344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.599360 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.599373 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.701926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.701958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.701969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.701983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.701993 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.805497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.805567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.805584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.805609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.805626 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.848320 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.848406 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.848553 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.848579 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.848625 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:49 crc kubenswrapper[4675]: E1121 13:33:49.848431 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.907517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.907772 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.907786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.907803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4675]: I1121 13:33:49.907813 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.010203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.010238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.010247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.010261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.010270 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.112848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.112905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.112927 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.112957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.112979 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.216482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.216525 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.216541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.216567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.216583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.319164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.319200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.319208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.319222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.319232 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.421757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.421807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.421819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.421838 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.421849 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.525318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.525402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.525427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.525455 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.525471 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.628347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.628418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.628441 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.628470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.628492 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.730364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.730394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.730403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.730416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.730424 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.833243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.833291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.833300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.833318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.833327 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.848568 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:50 crc kubenswrapper[4675]: E1121 13:33:50.848699 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.935687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.935755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.935774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.935802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4675]: I1121 13:33:50.935820 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.039315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.039382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.039396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.039418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.039433 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.142536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.142602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.142620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.142642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.142656 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.245592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.245656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.245672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.245695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.245710 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.348087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.348121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.348129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.348144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.348154 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.450874 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.450926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.450941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.450963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.450979 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.553452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.553489 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.553497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.553512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.553520 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.655732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.655772 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.655782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.655795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.655803 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.758492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.758559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.758572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.758593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.758605 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.848631 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.848719 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.848837 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:51 crc kubenswrapper[4675]: E1121 13:33:51.848869 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:51 crc kubenswrapper[4675]: E1121 13:33:51.848955 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:51 crc kubenswrapper[4675]: E1121 13:33:51.849021 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.860452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.860522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.860545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.860576 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.860598 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.963820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.963919 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.963947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.963979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4675]: I1121 13:33:51.964003 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.066016 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.066125 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.066144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.066186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.066205 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.169584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.169652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.169675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.169704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.169727 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.272266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.272332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.272355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.272384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.272408 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.375413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.375456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.375467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.375484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.375496 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.478044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.478108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.478120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.478137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.478150 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.580509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.580558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.580570 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.580588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.580599 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.683357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.683404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.683413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.683428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.683437 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.786604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.786647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.786666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.786683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.786694 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.848875 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:52 crc kubenswrapper[4675]: E1121 13:33:52.849085 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.889956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.890010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.890025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.890047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.890086 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.993719 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.993804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.993828 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.993859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4675]: I1121 13:33:52.993884 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.095906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.095947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.095958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.095972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.095981 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.198526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.198601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.198620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.198646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.198665 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.302042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.302147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.302172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.302203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.302227 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.405470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.405505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.405515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.405531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.405540 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.508966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.509002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.509018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.509034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.509043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.611616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.611697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.611712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.611732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.611743 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.714898 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.714953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.714968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.714986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.714998 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.819060 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.819181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.819204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.819233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.819255 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.848873 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.848870 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.849023 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:53 crc kubenswrapper[4675]: E1121 13:33:53.849225 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:53 crc kubenswrapper[4675]: E1121 13:33:53.850648 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.850826 4675 scope.go:117] "RemoveContainer" containerID="5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb" Nov 21 13:33:53 crc kubenswrapper[4675]: E1121 13:33:53.850276 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.921633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.921668 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.921679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.921696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4675]: I1121 13:33:53.921708 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.024807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.024882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.024905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.024935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.024957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.127588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.127616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.127630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.127648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.127662 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.230586 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.230626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.230638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.230658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.230670 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.333210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.333235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.333244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.333259 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.333269 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.439819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.439870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.439878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.439893 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.439902 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.460446 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/2.log" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.462811 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.463326 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.475740 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e5213
3bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.487019 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.500154 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.518641 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.533960 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.542375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.542415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.542429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.542449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.542463 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.555917 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.579366 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.596127 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.607442 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.621119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.634616 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.644994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.645043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.645054 4675 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.645116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.645133 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.651392 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.662634 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.679654 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.702216 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5ab
ebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.715030 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.725126 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.735506 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.746784 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.747237 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.747262 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.747271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.747286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.747297 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.848238 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:54 crc kubenswrapper[4675]: E1121 13:33:54.848609 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.849633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.849669 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.849678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.849692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.849701 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.880688 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec06ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.897333 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb0689b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.912355 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.926052 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.937624 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.949771 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.951777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.951885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.951988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.952110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.952204 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.961281 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 
crc kubenswrapper[4675]: I1121 13:33:54.971826 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.983474 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:54 crc kubenswrapper[4675]: I1121 13:33:54.997511 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.009976 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.022703 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.042179 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c
4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.054590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.054646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.054660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.054677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.054690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.056964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.070606 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.084505 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.104543 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.124281 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.141604 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.157190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.157227 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.157239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.157254 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.157265 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.260600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.260660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.260684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.260714 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.260737 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.369437 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.369502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.369514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.369530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.369543 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.468267 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/3.log" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.468914 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/2.log" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.470922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.470953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.470964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.471012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.471025 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.471371 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" exitCode=1 Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.471400 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.471427 4675 scope.go:117] "RemoveContainer" containerID="5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.472373 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:33:55 crc kubenswrapper[4675]: E1121 13:33:55.472658 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.496529 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf8
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.509056 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.554136 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.569700 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.573105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.573143 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.573155 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.573171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.573184 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.580981 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.593142 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.605854 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.624491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7880bf102ae733c03bc97977070df16548a432938df3a28de9b25c5f55f5bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:25Z\\\",\\\"message\\\":\\\"Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1121 13:33:25.270440 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:54Z\\\",\\\"message\\\":\\\"6 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1121 13:33:54.784228 6804 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1121 13:33:54.784145 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z]\\\\nI1121 13:33:54.784229 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vc5gn\\\\nI1121 13:33:54.784228 6804 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 
11.8µs\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.638509 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.648395 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.659906 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.670633 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.675843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.675880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.675888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.675903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.675945 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.685138 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.694912 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.703500 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.719999 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.732440 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.743256 4675 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.753393 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.778133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.778179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.778190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.778209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.778221 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.848163 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.848211 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.848179 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:55 crc kubenswrapper[4675]: E1121 13:33:55.848296 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:55 crc kubenswrapper[4675]: E1121 13:33:55.848405 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:55 crc kubenswrapper[4675]: E1121 13:33:55.848474 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.880956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.881006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.881024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.881044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.881054 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.984680 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.984733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.984744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.984767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4675]: I1121 13:33:55.984780 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.087047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.087126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.087149 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.087171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.087186 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.189003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.189389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.189405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.189424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.189436 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.291860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.291909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.291927 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.291951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.291967 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.396848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.396929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.396953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.396983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.397005 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.477009 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/3.log" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.480542 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:33:56 crc kubenswrapper[4675]: E1121 13:33:56.480741 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.495533 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.499211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.499248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.499260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.499280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.499298 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.507253 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.520631 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.532396 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.549537 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:54Z\\\",\\\"message\\\":\\\"6 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1121 13:33:54.784228 6804 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1121 13:33:54.784145 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z]\\\\nI1121 13:33:54.784229 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vc5gn\\\\nI1121 13:33:54.784228 6804 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 
11.8µs\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.562355 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.573181 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.583469 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.594725 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.601957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.601986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.601996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.602013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.602025 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.603852 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.611815 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.620388 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.645105 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.660140 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.673172 4675 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.683535 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.697011 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.704764 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.704798 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.704807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.704822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.704830 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.708233 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.717382 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.806781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.806823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.806833 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.806852 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.806864 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.848509 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:33:56 crc kubenswrapper[4675]: E1121 13:33:56.848694 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.909659 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.909695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.909703 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.909719 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4675]: I1121 13:33:56.909730 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.013020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.013121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.013146 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.013177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.013202 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.115764 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.115810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.115823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.115840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.115853 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.218785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.218835 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.218847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.218867 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.218879 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.321856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.321922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.321945 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.321974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.321998 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.424483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.424530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.424543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.424561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.424573 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.527345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.527394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.527406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.527430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.527442 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.630041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.630134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.630148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.630171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.630186 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.732789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.732870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.732896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.732925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.732950 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.835169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.835225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.835243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.835265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.835283 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.848806 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.848829 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.848851 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:57 crc kubenswrapper[4675]: E1121 13:33:57.848952 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:57 crc kubenswrapper[4675]: E1121 13:33:57.849118 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:33:57 crc kubenswrapper[4675]: E1121 13:33:57.849209 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.937895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.937934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.937942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.937955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4675]: I1121 13:33:57.937964 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.039997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.040031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.040041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.040054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.040062 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.142044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.142109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.142122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.142139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.142151 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.248894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.248951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.248975 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.248997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.249011 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.352377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.352449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.352470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.352499 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.352521 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.455317 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.455381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.455399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.455424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.455443 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.558564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.558616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.558629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.558645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.558656 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.661167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.661218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.661230 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.661249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.661261 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.764479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.764567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.764581 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.764603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.764619 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.848970 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k"
Nov 21 13:33:58 crc kubenswrapper[4675]: E1121 13:33:58.849203 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.867378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.867432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.867445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.867463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.867476 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.970774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.970845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.970862 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.970888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4675]: I1121 13:33:58.970906 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.073819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.073919 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.073945 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.073974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.073997 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.177437 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.177509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.177533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.177560 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.177583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.281566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.281626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.281648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.281676 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.281697 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.384150 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.384225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.384245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.384275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.384298 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.487325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.487395 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.487420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.487455 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.487478 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.590916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.590975 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.590992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.591019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.591038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.695145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.695213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.695228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.695249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.695262 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.797996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.798048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.798059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.798095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.798108 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.840377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.840451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.840469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.840499 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.840515 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.848773 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.848816 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.848902 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.848906 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.849025 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.849169 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.857498 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:59Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.862458 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.862511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.862535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.862566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.862589 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.878433 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:59Z is after 2025-08-24T17:21:41Z"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.883595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.883638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.883651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.883668 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.883681 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.898388 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.902544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.902600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.902616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.902640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.902657 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.915760 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.919818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.919900 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.919923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.919954 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.919976 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.934892 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:59 crc kubenswrapper[4675]: E1121 13:33:59.935001 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.936370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.936403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.936415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.936431 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4675]: I1121 13:33:59.936443 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.039400 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.039460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.039477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.039502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.039519 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.142444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.142501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.142519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.142544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.142563 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.245849 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.245940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.245975 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.246006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.246027 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.349374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.349732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.349774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.349830 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.349872 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.454803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.454865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.454886 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.454915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.454938 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.557767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.557837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.557862 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.557892 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.557914 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.661286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.661733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.661755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.661782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.661806 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.764688 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.764728 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.764744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.764766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.764783 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.848868 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:00 crc kubenswrapper[4675]: E1121 13:34:00.849156 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.866994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.867211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.867245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.867277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.867300 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.970632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.970713 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.970737 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.970765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4675]: I1121 13:34:00.970788 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.073446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.073514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.073537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.073570 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.073595 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.176520 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.176561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.176572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.176588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.176600 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.279142 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.279200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.279217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.279245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.279266 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.382199 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.382266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.382286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.382310 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.382330 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.485329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.485393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.485417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.485446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.485466 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.588773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.588879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.588918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.588951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.588977 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.692250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.692317 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.692332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.692359 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.692378 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.794585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.794631 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.794642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.794659 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.794671 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.848228 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.848335 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.848270 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:01 crc kubenswrapper[4675]: E1121 13:34:01.848394 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:01 crc kubenswrapper[4675]: E1121 13:34:01.848497 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:01 crc kubenswrapper[4675]: E1121 13:34:01.848579 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.897537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.897616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.897638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.897662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4675]: I1121 13:34:01.897680 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:01.999971 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.000026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.000045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.000087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.000102 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.103374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.103456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.103496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.103526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.103547 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.207629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.207698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.207718 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.207744 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.207763 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.310351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.310433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.310449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.310467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.310479 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.413138 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.413205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.413222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.413251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.413275 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.515636 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.515716 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.515759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.515791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.515812 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.618120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.618178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.618195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.618218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.618235 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.721354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.721426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.721452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.721482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.721503 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.824062 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.824124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.824135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.824152 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.824161 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.848968 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:02 crc kubenswrapper[4675]: E1121 13:34:02.849195 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.926634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.926673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.926681 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.926698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4675]: I1121 13:34:02.926706 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.030313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.030380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.030405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.030435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.030455 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.134127 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.134196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.134218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.134251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.134274 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.183334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:03 crc kubenswrapper[4675]: E1121 13:34:03.183571 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:34:03 crc kubenswrapper[4675]: E1121 13:34:03.183671 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs podName:3034a641-e8c3-4303-bb0e-1da29de3a41b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.183634703 +0000 UTC m=+183.910049460 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs") pod "network-metrics-daemon-djn7k" (UID: "3034a641-e8c3-4303-bb0e-1da29de3a41b") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.237752 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.237832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.237857 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.237893 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.237917 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.341059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.341133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.341146 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.341164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.341198 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.449437 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.449494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.449511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.449536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.449555 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.553187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.553261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.553283 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.553310 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.553332 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.656502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.656609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.656631 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.656660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.656680 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.760197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.760260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.760279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.760303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.760322 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.848384 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.848549 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.848422 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:03 crc kubenswrapper[4675]: E1121 13:34:03.848800 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:03 crc kubenswrapper[4675]: E1121 13:34:03.848867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:03 crc kubenswrapper[4675]: E1121 13:34:03.849005 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.863528 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.863587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.863604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.863630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.863647 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.966879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.966956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.966982 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.967014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4675]: I1121 13:34:03.967037 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.069788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.069856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.069873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.069898 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.069915 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.173562 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.173616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.173634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.173656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.173673 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.276259 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.276327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.276367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.276402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.276425 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.379113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.379205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.379227 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.379255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.379276 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.482022 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.482111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.482133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.482151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.482164 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.584715 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.584749 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.584760 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.584777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.584790 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.687831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.687887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.687903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.687926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.687943 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4675]: E1121 13:34:04.788742 4675 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.848141 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:04 crc kubenswrapper[4675]: E1121 13:34:04.848275 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.863367 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.876825 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.888994 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.900458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.911859 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.934282 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.953199 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.965649 4675 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.977007 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:04 crc kubenswrapper[4675]: I1121 13:34:04.989695 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.000746 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:04Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.011975 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.021784 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z" Nov 21 
13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.030709 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.042267 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.056767 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z"
Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.076334 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:54Z\\\",\\\"message\\\":\\\"6 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1121 13:33:54.784228 6804 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1121 13:33:54.784145 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z]\\\\nI1121 13:33:54.784229 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vc5gn\\\\nI1121 13:33:54.784228 6804 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 11.8µs\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z"
Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.088775 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z"
Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.096829 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:05Z is after 2025-08-24T17:21:41Z"
Nov 21 13:34:05 crc kubenswrapper[4675]: E1121 13:34:05.473251 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.847913 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.848026 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:34:05 crc kubenswrapper[4675]: I1121 13:34:05.848026 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:34:05 crc kubenswrapper[4675]: E1121 13:34:05.848248 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:34:05 crc kubenswrapper[4675]: E1121 13:34:05.848373 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:34:05 crc kubenswrapper[4675]: E1121 13:34:05.848470 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:34:06 crc kubenswrapper[4675]: I1121 13:34:06.848362 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k"
Nov 21 13:34:06 crc kubenswrapper[4675]: E1121 13:34:06.848560 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b"
Nov 21 13:34:07 crc kubenswrapper[4675]: I1121 13:34:07.848216 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:34:07 crc kubenswrapper[4675]: I1121 13:34:07.848349 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:34:07 crc kubenswrapper[4675]: E1121 13:34:07.848392 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:34:07 crc kubenswrapper[4675]: I1121 13:34:07.848460 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:34:07 crc kubenswrapper[4675]: E1121 13:34:07.848754 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:34:07 crc kubenswrapper[4675]: E1121 13:34:07.848787 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:34:08 crc kubenswrapper[4675]: I1121 13:34:08.848947 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k"
Nov 21 13:34:08 crc kubenswrapper[4675]: E1121 13:34:08.849260 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.847860 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.847926 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:34:09 crc kubenswrapper[4675]: E1121 13:34:09.848004 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.847901 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:34:09 crc kubenswrapper[4675]: E1121 13:34:09.848281 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:34:09 crc kubenswrapper[4675]: E1121 13:34:09.848476 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.941826 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.941879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.941894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.941916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.941933 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:09Z","lastTransitionTime":"2025-11-21T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:34:09 crc kubenswrapper[4675]: E1121 13:34:09.961122 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:09Z is after 2025-08-24T17:21:41Z"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.964633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.964708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.964729 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.964759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.964782 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:09Z","lastTransitionTime":"2025-11-21T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:34:09 crc kubenswrapper[4675]: E1121 13:34:09.975943 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:09Z is after 2025-08-24T17:21:41Z"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.979110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.979158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.979176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.979198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:34:09 crc kubenswrapper[4675]: I1121 13:34:09.979216 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:09Z","lastTransitionTime":"2025-11-21T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:34:09 crc kubenswrapper[4675]: E1121 13:34:09.998945 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:09Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.003740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.003800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.003812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.003831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.003843 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:10Z","lastTransitionTime":"2025-11-21T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:10 crc kubenswrapper[4675]: E1121 13:34:10.018224 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.021865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.021919 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.021931 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.021951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.021965 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:10Z","lastTransitionTime":"2025-11-21T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:10 crc kubenswrapper[4675]: E1121 13:34:10.037094 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:34:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7b9b045a-ab24-4730-a701-b9ff89571936\\\",\\\"systemUUID\\\":\\\"b6ce3dfe-f2a2-49b0-97be-2d30012dfcd8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:10Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:10 crc kubenswrapper[4675]: E1121 13:34:10.037236 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:34:10 crc kubenswrapper[4675]: E1121 13:34:10.475153 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.848989 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:10 crc kubenswrapper[4675]: E1121 13:34:10.849718 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:10 crc kubenswrapper[4675]: I1121 13:34:10.849849 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:34:10 crc kubenswrapper[4675]: E1121 13:34:10.850002 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:34:11 crc kubenswrapper[4675]: I1121 13:34:11.848511 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:11 crc kubenswrapper[4675]: I1121 13:34:11.848561 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:11 crc kubenswrapper[4675]: E1121 13:34:11.848633 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:11 crc kubenswrapper[4675]: I1121 13:34:11.848525 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:11 crc kubenswrapper[4675]: E1121 13:34:11.848773 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:11 crc kubenswrapper[4675]: E1121 13:34:11.848878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:12 crc kubenswrapper[4675]: I1121 13:34:12.848503 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:12 crc kubenswrapper[4675]: E1121 13:34:12.848691 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:13 crc kubenswrapper[4675]: I1121 13:34:13.848832 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:13 crc kubenswrapper[4675]: I1121 13:34:13.848874 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:13 crc kubenswrapper[4675]: I1121 13:34:13.848913 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:13 crc kubenswrapper[4675]: E1121 13:34:13.849058 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:13 crc kubenswrapper[4675]: E1121 13:34:13.849223 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:13 crc kubenswrapper[4675]: E1121 13:34:13.849381 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.848809 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:14 crc kubenswrapper[4675]: E1121 13:34:14.849034 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.863279 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21162055-1a92-4e7b-9717-ce6462331212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8ae297f0281f80ec8362b6e28001d47fd6ad23998b381bc06f6e7969b3236f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e013562474dfc729f3244d2190958ebbddf2502a70790b850705e25f27d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pftq9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rkqqf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.881208 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be0dfad2-b69a-4a9d-8802-154944def28b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d1869eeb3b017f4737cd2d9f318e2fc270ffbe066fffe939b9e50d843622abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99bfaa13de2883a2c0adde25ca2ae6fab56bf956341dc1e58f43aa9d5a1b9387\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://223da5564c70aefce707aac4bafa489e52133bbdd916a50c0505899705494b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65e633d3e07ad2253da05ee5048ae06693a65ff7cd39be64d6112850bffcf97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.901131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6db74e00-d40a-442b-b5b0-4d3b28e05178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d11767be565f4b3936936bd10a35ccbdaced6418a5f1b7b6913b34a39041433c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcmqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vnxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.918784 4675 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hsw5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455c5b5a-917d-4361-bcc0-9283ffce0e86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:39Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887\\\\n2025-11-21T13:32:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_094a65b9-f5c6-437e-8fa7-055bc4100887 to /host/opt/cni/bin/\\\\n2025-11-21T13:32:54Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:54Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r5wh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hsw5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.937278 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bj56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee0f125b-1d69-4a42-9d1e-14f3673a1cb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b4e305506fd30fe7aefd234a7372ce56855fb5489aece64e54a0fa2df0c87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70812e81cf0739108e3022ca9c8dd525bbc21159f4bc7b51690ecebbc10ba087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dd4094955af8b3160f6d1bbbd9187f4135352f2eabb56f6deaa63134543e6e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e88e9fb9ba6f94591adc8552f7686742a80614b0f41df4bd241e06a19022ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://391f14d0c9ee5aaee7c84ab05020e7f8a36bd5b3445c877367b0b678249562d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4c0230ec80f54638748c1fe854fb7d71836a5071f546c88bd2aa7b7a4ccd03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6035adb17455de0f744e0f0f068f4d5dabafc706b47c9678fec8740619e952c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:33:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7464\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bj56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.949670 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77cmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b683f87-49b9-47b9-bce6-62c5df20b364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a607cbf64c5966d8c3a1643c305e7fbc0803bdbec1ad5c437bc47a9178ea7f70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8hbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77cmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.962907 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-djn7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3034a641-e8c3-4303-bb0e-1da29de3a41b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sblr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-djn7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.980060 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c25d1896-683f-4337-a47a-3391e7d6dfe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb3d20b495460a9f224aff7addd5ba750f0c766d16c97df4e4562a81afd6daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa1522c657d5044476f05259dd7241026a1010ca7b8683179a031c020e894b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1952e00609bed8c2da0f2442d1e5f6cc71d6d0bfb91acda29e19570164e336c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4ed213144c2a0b55c40cd72e8f6155506120390939f5ea712037c14f82faa13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:14 crc kubenswrapper[4675]: I1121 13:34:14.996105 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1752538b21774d1b184154ac71aff9b2032dd2295329b26d3582c79477e644df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.018522 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:54Z\\\",\\\"message\\\":\\\"6 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1121 13:33:54.784228 6804 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1121 13:33:54.784145 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:54Z is after 2025-08-24T17:21:41Z]\\\\nI1121 13:33:54.784229 6804 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-vc5gn\\\\nI1121 13:33:54.784228 6804 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 
11.8µs\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95hrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w28jn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.034146 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbdf681f227c73041fd22be22cdda4ddbc0d0448da6c42989c73b99c0eba4d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7a9eca2ba93f3c013a857d89bf02aa4ed7c311705194e15efac811b5aadb4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.048191 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.061263 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.077255 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.092870 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a16d93817bbfbfd8cd94a2dea80ddbbf8cf854d553db12918ad38b2afba3aaef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.107486 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vc5gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ef5406-1758-498a-b74d-66ffdad6f318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8664d6a9877629b05bd6c722ace2d9e739301a09fbbc4435f0937ff028c7f911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9wbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vc5gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.125296 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f630620-45f6-4ce2-b3bb-5ec3e6e758b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabe363fffb90ca65ad0cac2d28d56ec44596ac7f25e6eb9669a3ba9b6b61369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://467e86df80a2d684495967c866edcc18844937b133fcbe0bfd40cdb5c2accfdd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.146311 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2787d9c4-5efe-4045-8a57-813d500f76b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e02a2aec6301fafaefe0f1e7057b4a1669bf397c443edc8b463f31d159f00a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2b61431c942c4a34da5c8375df2e75e8c0dafa2c16e20088b6c35659bd5e539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aff611b2c9d32b905f1b53345f661b155135b9c8bb96f29d9c6d2bfec1b2da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5c0f5ccab54bb48184c03056e1002f0d3beec0
6ddb9f8375f8bc0202c7077fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ea77306d307593251ee6a7e0f09a6b13ac8e11a461a7ad6bdb24e4d948bafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://217bd244f5e5c6511a59c999636f50833ced202bcc144b223138f81a87435a6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0b757ce5859b8e6e401c86cd60d8b304be2c471de2974912fd5abebe722e6c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1969a2547f70b1d55e681dd95a280a1a071cbf26f71a6103017d0ff5ec690ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.164359 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee9e9a64-a961-4bfd-b60a-620dfc5aa96f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f40f4bcb6713839a7b87f53e39a0649e5ad97e025376d6cd401f57668768575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3820b53b4d0efb77d829d1ea522cb06
89b2d438aed1a84de5521fac2d8fb114\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70cd91a6fa5f4d6aa542d039a340cf56ec54a891c02407271025d3cb2bfaa78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12872eda22f36703ec77d0b962010491a0d9318dffe2dbc8b0ae46b1744d814f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60cb32166c72887d2a0d32b8aa365f65ad2ea21ad3f0e2f5f230da3522c34200\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW1121 13:32:41.205996 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 13:32:41.206293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:41.207198 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1400666830/tls.crt::/tmp/serving-cert-1400666830/tls.key\\\\\\\"\\\\nI1121 13:32:41.702038 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:41.826606 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:41.826645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:41.826668 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:41.826673 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:41.830510 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:41.830551 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:41.830562 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:41.830565 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:41.830568 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:41.830570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:41.830616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:41.844048 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44873a74a320eed421ff2444fec3688df83348bd38352e3c19fdd46a0bd4147\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afc3fa4d645d9cdbb7bac4c85b763840d16f496bcff62aa7b4e22f42b96d75d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:34:15Z is after 2025-08-24T17:21:41Z" Nov 21 13:34:15 crc kubenswrapper[4675]: E1121 13:34:15.475726 4675 kubelet.go:2916] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.848178 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.848264 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:15 crc kubenswrapper[4675]: E1121 13:34:15.848322 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:15 crc kubenswrapper[4675]: I1121 13:34:15.848339 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:15 crc kubenswrapper[4675]: E1121 13:34:15.848445 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:15 crc kubenswrapper[4675]: E1121 13:34:15.848603 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:16 crc kubenswrapper[4675]: I1121 13:34:16.848363 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:16 crc kubenswrapper[4675]: E1121 13:34:16.848540 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:17 crc kubenswrapper[4675]: I1121 13:34:17.848208 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:17 crc kubenswrapper[4675]: I1121 13:34:17.848253 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:17 crc kubenswrapper[4675]: I1121 13:34:17.848266 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:17 crc kubenswrapper[4675]: E1121 13:34:17.848619 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:17 crc kubenswrapper[4675]: E1121 13:34:17.848680 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:17 crc kubenswrapper[4675]: E1121 13:34:17.848738 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:18 crc kubenswrapper[4675]: I1121 13:34:18.848281 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:18 crc kubenswrapper[4675]: E1121 13:34:18.848485 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:19 crc kubenswrapper[4675]: I1121 13:34:19.849004 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:19 crc kubenswrapper[4675]: E1121 13:34:19.849187 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:19 crc kubenswrapper[4675]: I1121 13:34:19.849044 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:19 crc kubenswrapper[4675]: I1121 13:34:19.849019 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:19 crc kubenswrapper[4675]: E1121 13:34:19.849399 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:19 crc kubenswrapper[4675]: E1121 13:34:19.849381 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.142915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.142979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.142998 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.143023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.143043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:20Z","lastTransitionTime":"2025-11-21T13:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.216868 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth"] Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.217447 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.221128 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.221367 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.222459 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.222817 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.265024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f332376-4922-410c-9dc7-33358827e87b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.265215 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f332376-4922-410c-9dc7-33358827e87b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.265275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f332376-4922-410c-9dc7-33358827e87b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.265515 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f332376-4922-410c-9dc7-33358827e87b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.265614 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f332376-4922-410c-9dc7-33358827e87b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.267488 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vc5gn" podStartSLOduration=95.267463478 podStartE2EDuration="1m35.267463478s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.266997817 
+0000 UTC m=+136.993412554" watchObservedRunningTime="2025-11-21 13:34:20.267463478 +0000 UTC m=+136.993878245" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.286660 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=40.286632088 podStartE2EDuration="40.286632088s" podCreationTimestamp="2025-11-21 13:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.285816808 +0000 UTC m=+137.012231585" watchObservedRunningTime="2025-11-21 13:34:20.286632088 +0000 UTC m=+137.013046855" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.346429 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=98.346409165 podStartE2EDuration="1m38.346409165s" podCreationTimestamp="2025-11-21 13:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.326044795 +0000 UTC m=+137.052459612" watchObservedRunningTime="2025-11-21 13:34:20.346409165 +0000 UTC m=+137.072823922" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.362970 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=98.362942549 podStartE2EDuration="1m38.362942549s" podCreationTimestamp="2025-11-21 13:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.346384795 +0000 UTC m=+137.072799562" watchObservedRunningTime="2025-11-21 13:34:20.362942549 +0000 UTC m=+137.089357306" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.366501 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f332376-4922-410c-9dc7-33358827e87b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.366584 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f332376-4922-410c-9dc7-33358827e87b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.366610 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f332376-4922-410c-9dc7-33358827e87b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.366778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f332376-4922-410c-9dc7-33358827e87b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.366708 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f332376-4922-410c-9dc7-33358827e87b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.367439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f332376-4922-410c-9dc7-33358827e87b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.367499 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f332376-4922-410c-9dc7-33358827e87b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.367860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f332376-4922-410c-9dc7-33358827e87b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.375810 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f332376-4922-410c-9dc7-33358827e87b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.389502 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f332376-4922-410c-9dc7-33358827e87b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-njsth\" (UID: \"1f332376-4922-410c-9dc7-33358827e87b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.398988 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=84.398955801 podStartE2EDuration="1m24.398955801s" podCreationTimestamp="2025-11-21 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.398668104 +0000 UTC m=+137.125082851" watchObservedRunningTime="2025-11-21 13:34:20.398955801 +0000 UTC m=+137.125370798" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.409361 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podStartSLOduration=95.409342621 podStartE2EDuration="1m35.409342621s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.408812168 +0000 UTC m=+137.135226905" watchObservedRunningTime="2025-11-21 13:34:20.409342621 +0000 UTC m=+137.135757348" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.442759 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rkqqf" podStartSLOduration=95.442734997 podStartE2EDuration="1m35.442734997s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.44245593 +0000 UTC m=+137.168870677" watchObservedRunningTime="2025-11-21 13:34:20.442734997 +0000 UTC m=+137.169149744" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.443218 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hsw5h" podStartSLOduration=95.443207359 podStartE2EDuration="1m35.443207359s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.429196518 +0000 UTC m=+137.155611285" watchObservedRunningTime="2025-11-21 13:34:20.443207359 +0000 UTC m=+137.169622096" Nov 21 13:34:20 crc kubenswrapper[4675]: E1121 13:34:20.477153 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.484128 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=95.48410994299999 podStartE2EDuration="1m35.484109943s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.483652522 +0000 UTC m=+137.210067249" watchObservedRunningTime="2025-11-21 13:34:20.484109943 +0000 UTC m=+137.210524690" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.539881 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.546007 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bj56b" podStartSLOduration=95.545969862 podStartE2EDuration="1m35.545969862s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.530962196 +0000 UTC m=+137.257376923" watchObservedRunningTime="2025-11-21 13:34:20.545969862 +0000 UTC m=+137.272384589" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.547502 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-77cmk" podStartSLOduration=95.54749404 podStartE2EDuration="1m35.54749404s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:20.546145036 +0000 UTC m=+137.272559773" watchObservedRunningTime="2025-11-21 13:34:20.54749404 +0000 UTC m=+137.273908767" Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.562591 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" event={"ID":"1f332376-4922-410c-9dc7-33358827e87b","Type":"ContainerStarted","Data":"911e9126a9e342db3ce59f7c773ca96cc3be225bf17413121892f4608c9fd1e3"} Nov 21 13:34:20 crc kubenswrapper[4675]: I1121 13:34:20.848473 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:20 crc kubenswrapper[4675]: E1121 13:34:20.848624 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:21 crc kubenswrapper[4675]: I1121 13:34:21.566922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" event={"ID":"1f332376-4922-410c-9dc7-33358827e87b","Type":"ContainerStarted","Data":"d53f2ab95c799dbf8753cee8f5a45f48c0d9d38b34415edb4b61bc6681c6b632"} Nov 21 13:34:21 crc kubenswrapper[4675]: I1121 13:34:21.584920 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-njsth" podStartSLOduration=96.584897356 podStartE2EDuration="1m36.584897356s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:21.584481996 +0000 UTC m=+138.310896753" watchObservedRunningTime="2025-11-21 13:34:21.584897356 +0000 UTC m=+138.311312113" Nov 21 13:34:21 crc kubenswrapper[4675]: I1121 13:34:21.848950 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:21 crc kubenswrapper[4675]: E1121 13:34:21.849394 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:21 crc kubenswrapper[4675]: I1121 13:34:21.849100 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:21 crc kubenswrapper[4675]: E1121 13:34:21.849681 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:21 crc kubenswrapper[4675]: I1121 13:34:21.848979 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:21 crc kubenswrapper[4675]: E1121 13:34:21.849918 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:22 crc kubenswrapper[4675]: I1121 13:34:22.848780 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:22 crc kubenswrapper[4675]: E1121 13:34:22.849802 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:22 crc kubenswrapper[4675]: I1121 13:34:22.850409 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:34:22 crc kubenswrapper[4675]: E1121 13:34:22.850700 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:34:23 crc kubenswrapper[4675]: I1121 13:34:23.848845 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:23 crc kubenswrapper[4675]: I1121 13:34:23.848851 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:23 crc kubenswrapper[4675]: E1121 13:34:23.849033 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:23 crc kubenswrapper[4675]: E1121 13:34:23.849099 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:23 crc kubenswrapper[4675]: I1121 13:34:23.848974 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:23 crc kubenswrapper[4675]: E1121 13:34:23.849245 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:24 crc kubenswrapper[4675]: I1121 13:34:24.851781 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:24 crc kubenswrapper[4675]: E1121 13:34:24.852040 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:25 crc kubenswrapper[4675]: E1121 13:34:25.477679 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:25 crc kubenswrapper[4675]: I1121 13:34:25.848356 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:25 crc kubenswrapper[4675]: I1121 13:34:25.848431 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:25 crc kubenswrapper[4675]: I1121 13:34:25.848356 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:25 crc kubenswrapper[4675]: E1121 13:34:25.848529 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:25 crc kubenswrapper[4675]: E1121 13:34:25.848613 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:25 crc kubenswrapper[4675]: E1121 13:34:25.848712 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.585239 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/1.log" Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.586537 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/0.log" Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.586654 4675 generic.go:334] "Generic (PLEG): container finished" podID="455c5b5a-917d-4361-bcc0-9283ffce0e86" containerID="73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674" exitCode=1 Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.586721 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerDied","Data":"73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674"} Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.586830 4675 scope.go:117] "RemoveContainer" containerID="27a9071b1ec91b9df30236ee57c702178f612ee78272da6b7962a67e6dfb04d9" Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.587467 4675 scope.go:117] "RemoveContainer" containerID="73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674" Nov 21 13:34:26 crc kubenswrapper[4675]: E1121 13:34:26.587826 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hsw5h_openshift-multus(455c5b5a-917d-4361-bcc0-9283ffce0e86)\"" pod="openshift-multus/multus-hsw5h" podUID="455c5b5a-917d-4361-bcc0-9283ffce0e86" Nov 21 13:34:26 crc kubenswrapper[4675]: I1121 13:34:26.848756 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:26 crc kubenswrapper[4675]: E1121 13:34:26.848921 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:27 crc kubenswrapper[4675]: I1121 13:34:27.591005 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/1.log" Nov 21 13:34:27 crc kubenswrapper[4675]: I1121 13:34:27.847933 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:27 crc kubenswrapper[4675]: I1121 13:34:27.847995 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:27 crc kubenswrapper[4675]: I1121 13:34:27.847936 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:27 crc kubenswrapper[4675]: E1121 13:34:27.848161 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:27 crc kubenswrapper[4675]: E1121 13:34:27.848325 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:27 crc kubenswrapper[4675]: E1121 13:34:27.848439 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:28 crc kubenswrapper[4675]: I1121 13:34:28.848314 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:28 crc kubenswrapper[4675]: E1121 13:34:28.849035 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:29 crc kubenswrapper[4675]: I1121 13:34:29.848430 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:29 crc kubenswrapper[4675]: I1121 13:34:29.848710 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:29 crc kubenswrapper[4675]: I1121 13:34:29.848710 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:29 crc kubenswrapper[4675]: E1121 13:34:29.849180 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:29 crc kubenswrapper[4675]: E1121 13:34:29.849203 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:29 crc kubenswrapper[4675]: E1121 13:34:29.849194 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:30 crc kubenswrapper[4675]: E1121 13:34:30.479448 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:30 crc kubenswrapper[4675]: I1121 13:34:30.848804 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:30 crc kubenswrapper[4675]: E1121 13:34:30.849531 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:31 crc kubenswrapper[4675]: I1121 13:34:31.848706 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:31 crc kubenswrapper[4675]: E1121 13:34:31.848842 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:31 crc kubenswrapper[4675]: I1121 13:34:31.848733 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:31 crc kubenswrapper[4675]: E1121 13:34:31.848920 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:31 crc kubenswrapper[4675]: I1121 13:34:31.848916 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:31 crc kubenswrapper[4675]: E1121 13:34:31.849173 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:32 crc kubenswrapper[4675]: I1121 13:34:32.848669 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:32 crc kubenswrapper[4675]: E1121 13:34:32.848839 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:33 crc kubenswrapper[4675]: I1121 13:34:33.848028 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:33 crc kubenswrapper[4675]: I1121 13:34:33.848179 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:33 crc kubenswrapper[4675]: E1121 13:34:33.848272 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:33 crc kubenswrapper[4675]: I1121 13:34:33.848187 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:33 crc kubenswrapper[4675]: E1121 13:34:33.848375 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:33 crc kubenswrapper[4675]: E1121 13:34:33.848587 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:34 crc kubenswrapper[4675]: I1121 13:34:34.848615 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:34 crc kubenswrapper[4675]: E1121 13:34:34.849653 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:34 crc kubenswrapper[4675]: I1121 13:34:34.849813 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:34:34 crc kubenswrapper[4675]: E1121 13:34:34.849965 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w28jn_openshift-ovn-kubernetes(5fd58cf4-de2e-4357-96eb-4fdb4694ea48)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" Nov 21 13:34:35 crc kubenswrapper[4675]: E1121 13:34:35.484198 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:35 crc kubenswrapper[4675]: I1121 13:34:35.848317 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:35 crc kubenswrapper[4675]: I1121 13:34:35.848371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:35 crc kubenswrapper[4675]: I1121 13:34:35.848317 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:35 crc kubenswrapper[4675]: E1121 13:34:35.848589 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:35 crc kubenswrapper[4675]: E1121 13:34:35.848773 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:35 crc kubenswrapper[4675]: E1121 13:34:35.849028 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:36 crc kubenswrapper[4675]: I1121 13:34:36.848788 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:36 crc kubenswrapper[4675]: E1121 13:34:36.849005 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:37 crc kubenswrapper[4675]: I1121 13:34:37.848176 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:37 crc kubenswrapper[4675]: I1121 13:34:37.848255 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:37 crc kubenswrapper[4675]: I1121 13:34:37.848184 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:37 crc kubenswrapper[4675]: E1121 13:34:37.848409 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:37 crc kubenswrapper[4675]: E1121 13:34:37.848583 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:37 crc kubenswrapper[4675]: E1121 13:34:37.848781 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:38 crc kubenswrapper[4675]: I1121 13:34:38.848531 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:38 crc kubenswrapper[4675]: E1121 13:34:38.848734 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:39 crc kubenswrapper[4675]: I1121 13:34:39.848460 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:39 crc kubenswrapper[4675]: I1121 13:34:39.848568 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:39 crc kubenswrapper[4675]: E1121 13:34:39.848805 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:39 crc kubenswrapper[4675]: I1121 13:34:39.848838 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:39 crc kubenswrapper[4675]: E1121 13:34:39.848958 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:39 crc kubenswrapper[4675]: I1121 13:34:39.849123 4675 scope.go:117] "RemoveContainer" containerID="73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674" Nov 21 13:34:39 crc kubenswrapper[4675]: E1121 13:34:39.849150 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:40 crc kubenswrapper[4675]: E1121 13:34:40.485966 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:40 crc kubenswrapper[4675]: I1121 13:34:40.631930 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/1.log" Nov 21 13:34:40 crc kubenswrapper[4675]: I1121 13:34:40.631987 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerStarted","Data":"26aedf96f496e3744765f40ce0f4bd2ed20778645ad27ae30a179c9de8454a5f"} Nov 21 13:34:40 crc kubenswrapper[4675]: I1121 13:34:40.848427 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:40 crc kubenswrapper[4675]: E1121 13:34:40.848595 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:41 crc kubenswrapper[4675]: I1121 13:34:41.848310 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:41 crc kubenswrapper[4675]: I1121 13:34:41.848401 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:41 crc kubenswrapper[4675]: E1121 13:34:41.848541 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:41 crc kubenswrapper[4675]: I1121 13:34:41.848324 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:41 crc kubenswrapper[4675]: E1121 13:34:41.848678 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:41 crc kubenswrapper[4675]: E1121 13:34:41.848937 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:42 crc kubenswrapper[4675]: I1121 13:34:42.848832 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:42 crc kubenswrapper[4675]: E1121 13:34:42.849005 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:43 crc kubenswrapper[4675]: I1121 13:34:43.848356 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:43 crc kubenswrapper[4675]: I1121 13:34:43.848404 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:43 crc kubenswrapper[4675]: I1121 13:34:43.848486 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:43 crc kubenswrapper[4675]: E1121 13:34:43.848573 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:43 crc kubenswrapper[4675]: E1121 13:34:43.848708 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:43 crc kubenswrapper[4675]: E1121 13:34:43.848841 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:44 crc kubenswrapper[4675]: I1121 13:34:44.848586 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:44 crc kubenswrapper[4675]: E1121 13:34:44.851204 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:45 crc kubenswrapper[4675]: E1121 13:34:45.486648 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:45 crc kubenswrapper[4675]: I1121 13:34:45.847933 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:45 crc kubenswrapper[4675]: I1121 13:34:45.847954 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:45 crc kubenswrapper[4675]: I1121 13:34:45.847991 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:45 crc kubenswrapper[4675]: E1121 13:34:45.848405 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:45 crc kubenswrapper[4675]: E1121 13:34:45.848531 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:45 crc kubenswrapper[4675]: E1121 13:34:45.848604 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:46 crc kubenswrapper[4675]: I1121 13:34:46.136322 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:34:46 crc kubenswrapper[4675]: I1121 13:34:46.136425 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:34:46 crc kubenswrapper[4675]: I1121 13:34:46.848028 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:46 crc kubenswrapper[4675]: E1121 13:34:46.848258 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:47 crc kubenswrapper[4675]: I1121 13:34:47.848592 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:47 crc kubenswrapper[4675]: E1121 13:34:47.848750 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:47 crc kubenswrapper[4675]: I1121 13:34:47.848945 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:47 crc kubenswrapper[4675]: I1121 13:34:47.849234 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:47 crc kubenswrapper[4675]: E1121 13:34:47.849300 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:47 crc kubenswrapper[4675]: E1121 13:34:47.849455 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:47 crc kubenswrapper[4675]: I1121 13:34:47.849751 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:34:48 crc kubenswrapper[4675]: I1121 13:34:48.664389 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/3.log" Nov 21 13:34:48 crc kubenswrapper[4675]: I1121 13:34:48.667845 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerStarted","Data":"c5f661d8d67aa7543bd09fe3d0b66402ebe6bac1a49d1de091718eed7ff1ace7"} Nov 21 13:34:48 crc kubenswrapper[4675]: I1121 13:34:48.668583 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:34:48 crc kubenswrapper[4675]: I1121 13:34:48.696920 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podStartSLOduration=123.696901007 podStartE2EDuration="2m3.696901007s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:48.696412285 +0000 UTC m=+165.422827022" watchObservedRunningTime="2025-11-21 13:34:48.696901007 +0000 UTC m=+165.423315744" Nov 21 13:34:48 crc kubenswrapper[4675]: I1121 13:34:48.848186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:48 crc kubenswrapper[4675]: E1121 13:34:48.848370 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:48 crc kubenswrapper[4675]: I1121 13:34:48.873167 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-djn7k"] Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.671708 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.672550 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.848456 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.848565 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.848661 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.848565 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.848728 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.848793 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.879232 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.879368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.879419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.879448 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:49 crc kubenswrapper[4675]: I1121 13:34:49.879484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879553 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879566 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:36:51.879505638 +0000 UTC m=+288.605920405 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879583 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879614 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879627 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879644 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879679 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879651 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879753 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879638 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:36:51.879618851 +0000 UTC m=+288.606033668 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879804 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:36:51.879792465 +0000 UTC m=+288.606207202 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879820 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:36:51.879810605 +0000 UTC m=+288.606225342 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:34:49 crc kubenswrapper[4675]: E1121 13:34:49.879834 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:36:51.879827016 +0000 UTC m=+288.606241753 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:34:50 crc kubenswrapper[4675]: E1121 13:34:50.488986 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:51 crc kubenswrapper[4675]: I1121 13:34:51.848912 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:51 crc kubenswrapper[4675]: I1121 13:34:51.848944 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:51 crc kubenswrapper[4675]: I1121 13:34:51.848955 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:51 crc kubenswrapper[4675]: I1121 13:34:51.849055 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:51 crc kubenswrapper[4675]: E1121 13:34:51.849149 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:51 crc kubenswrapper[4675]: E1121 13:34:51.849480 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:51 crc kubenswrapper[4675]: E1121 13:34:51.849648 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:51 crc kubenswrapper[4675]: E1121 13:34:51.849797 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:53 crc kubenswrapper[4675]: I1121 13:34:53.848950 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:53 crc kubenswrapper[4675]: I1121 13:34:53.848950 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:53 crc kubenswrapper[4675]: I1121 13:34:53.849168 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:53 crc kubenswrapper[4675]: I1121 13:34:53.848980 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:53 crc kubenswrapper[4675]: E1121 13:34:53.849192 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:53 crc kubenswrapper[4675]: E1121 13:34:53.849430 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:53 crc kubenswrapper[4675]: E1121 13:34:53.849509 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-djn7k" podUID="3034a641-e8c3-4303-bb0e-1da29de3a41b" Nov 21 13:34:53 crc kubenswrapper[4675]: E1121 13:34:53.849586 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.848397 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.848471 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.848486 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.848419 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.851348 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.851369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.851854 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.851865 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.851899 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 21 13:34:55 crc kubenswrapper[4675]: I1121 13:34:55.853284 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.864320 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.941975 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6r8cf"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.942461 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.946207 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwz5c"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.946501 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5872"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.946713 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.947022 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.947337 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.947537 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.949916 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.950038 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.950590 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.950706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.950801 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.951227 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.951289 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.957721 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.957899 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.959572 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cz299"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.959942 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.961224 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5872"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.965508 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.965702 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.965811 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.965930 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.966100 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.966214 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.965510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.968408 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5kdbn"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.969012 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971140 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971290 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971483 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971603 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971699 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971835 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.971926 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.972031 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.972186 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.973722 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ms8dm"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.974055 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.974327 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.978384 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.978424 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.978490 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.978728 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.978901 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.978928 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.979294 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.983494 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.983548 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.983963 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.984534 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.984729 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.984932 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.985045 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.985311 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.985432 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.985561 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.985838 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 
13:35:00.986418 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.989243 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sg677"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.989707 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.992414 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.992819 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.993712 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tz4xg"] Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.994185 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.996595 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.996969 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.997041 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.997577 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.997624 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.997586 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.997745 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 21 13:35:00 crc kubenswrapper[4675]: I1121 13:35:00.997902 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.005611 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.005770 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.006321 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.020754 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-6r8cf"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.020809 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.022553 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.022621 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.022733 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.022811 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.022837 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.024185 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.024183 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.024714 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.031012 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nd8c"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.031505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.034351 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.034914 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.035135 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tcjb8"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.035551 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.036400 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.036710 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.038173 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rg5ls"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.038591 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.039978 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.048965 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.051055 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.051365 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.051486 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.051669 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.051817 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.078563 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.078878 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.053401 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.052096 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.053504 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.053506 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.053585 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.053668 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.053689 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.054026 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.055496 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.055547 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.055585 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.055651 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.055706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.055794 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.056865 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.056965 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.057390 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.097902 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-smdhk"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.098270 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.099142 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.099892 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.100533 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.102159 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.102315 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.102458 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.102650 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.103674 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.104503 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.105021 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.106587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.111513 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xs2g9"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.111944 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.112229 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.112415 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.112587 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.119204 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.124850 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.125824 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.125983 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8742d972-144c-43ee-8586-0901733ecb49-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.126143 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-etcd-client\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.127424 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8742d972-144c-43ee-8586-0901733ecb49-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.128917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-dir\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.129137 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.129299 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-audit\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 
21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.129682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c3091040-d378-4c3b-9f64-bf750e9b27f1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.131897 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-client-ca\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.131547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.130913 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.132935 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.133111 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.133400 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z8jjw"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.133513 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-service-ca\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlmv\" (UniqueName: \"kubernetes.io/projected/d16b4be5-ea4f-4d90-b2be-3e9582858283-kube-api-access-lwlmv\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134213 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4ab0d2-e28e-4281-ba62-51165d7894e0-audit-dir\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-serving-cert\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134262 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134277 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv89g\" (UniqueName: \"kubernetes.io/projected/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-kube-api-access-sv89g\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134293 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-serving-cert\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-trusted-ca-bundle\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 
13:35:01.134326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-audit-policies\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134347 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134352 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134652 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134364 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-images\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.134979 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135057 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcffw\" (UniqueName: \"kubernetes.io/projected/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-kube-api-access-xcffw\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135250 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvzqg\" (UniqueName: \"kubernetes.io/projected/cb0a749f-6d30-4f56-aba4-10a5339044f7-kube-api-access-nvzqg\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135437 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-config\") pod \"controller-manager-879f6c89f-k5872\" (UID: 
\"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135536 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-config\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135628 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cz299"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135659 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcv9n\" (UniqueName: \"kubernetes.io/projected/0d4777cf-9799-450d-a46f-5d5bedeaa706-kube-api-access-xcv9n\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135806 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwsz\" (UniqueName: \"kubernetes.io/projected/c3091040-d378-4c3b-9f64-bf750e9b27f1-kube-api-access-rmwsz\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.135890 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-serving-cert\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136042 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0a749f-6d30-4f56-aba4-10a5339044f7-audit-dir\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136101 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-oauth-serving-cert\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3091040-d378-4c3b-9f64-bf750e9b27f1-serving-cert\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136155 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136208 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-oauth-config\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136234 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d16b4be5-ea4f-4d90-b2be-3e9582858283-serving-cert\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136314 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136365 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8742d972-144c-43ee-8586-0901733ecb49-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136385 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j26gd\" (UniqueName: \"kubernetes.io/projected/8742d972-144c-43ee-8586-0901733ecb49-kube-api-access-j26gd\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8fb0eb7-f6b1-4097-b167-c443441a28a6-trusted-ca\") pod 
\"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-policies\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136443 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-etcd-client\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136488 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6mx\" (UniqueName: \"kubernetes.io/projected/fa4ab0d2-e28e-4281-ba62-51165d7894e0-kube-api-access-xv6mx\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136508 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-image-import-ca\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136549 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fb0eb7-f6b1-4097-b167-c443441a28a6-serving-cert\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fvq9\" (UniqueName: \"kubernetes.io/projected/a8fb0eb7-f6b1-4097-b167-c443441a28a6-kube-api-access-4fvq9\") pod \"console-operator-58897d9998-5kdbn\" (UID: 
\"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-config\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136633 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-encryption-config\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136653 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-etcd-serving-ca\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136681 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136699 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0a749f-6d30-4f56-aba4-10a5339044f7-node-pullsecrets\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136721 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-config\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136739 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-encryption-config\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" 
Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136760 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrxf\" (UniqueName: \"kubernetes.io/projected/6190d999-660a-44f5-a51b-cd53647289db-kube-api-access-nfrxf\") pod \"downloads-7954f5f757-ms8dm\" (UID: \"6190d999-660a-44f5-a51b-cd53647289db\") " pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136784 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fb0eb7-f6b1-4097-b167-c443441a28a6-config\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136807 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136830 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.136856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.137152 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.137715 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.138162 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.141582 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.142259 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.142411 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.143019 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.143151 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.143264 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.143284 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.147413 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.148548 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.149727 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.153825 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.154247 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.158974 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5svl"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.159931 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.160423 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.161484 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.164910 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.165190 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.165648 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.166272 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-v694d"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.167355 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.169058 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.169677 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.170305 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.170583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.170816 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.175686 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.176424 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mnf4"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.176910 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.177677 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.178876 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.179391 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.180036 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5kdbn"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.182111 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9tglw"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.191041 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.191120 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.191828 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.196216 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rg5ls"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.197089 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.212508 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tz4xg"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.213527 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwz5c"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.215459 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.216061 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sg677"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.217082 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tcjb8"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.220957 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z8jjw"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.221005 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.222086 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-smdhk"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.223215 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.225301 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.225663 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ms8dm"] Nov 21 13:35:01 crc 
kubenswrapper[4675]: I1121 13:35:01.227023 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9tglw"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.228122 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.229321 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.230458 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.231746 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.232758 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.234012 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.235745 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.236558 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-trusted-ca-bundle\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237318 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-audit-policies\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-config\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237373 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-images\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237395 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcffw\" (UniqueName: \"kubernetes.io/projected/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-kube-api-access-xcffw\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvzqg\" (UniqueName: \"kubernetes.io/projected/cb0a749f-6d30-4f56-aba4-10a5339044f7-kube-api-access-nvzqg\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237441 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237463 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-config\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-config\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237505 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcv9n\" (UniqueName: \"kubernetes.io/projected/0d4777cf-9799-450d-a46f-5d5bedeaa706-kube-api-access-xcv9n\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237543 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwsz\" (UniqueName: \"kubernetes.io/projected/c3091040-d378-4c3b-9f64-bf750e9b27f1-kube-api-access-rmwsz\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237567 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-serving-cert\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc 
kubenswrapper[4675]: I1121 13:35:01.237591 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9864d\" (UniqueName: \"kubernetes.io/projected/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-kube-api-access-9864d\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237615 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3091040-d378-4c3b-9f64-bf750e9b27f1-serving-cert\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237640 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0a749f-6d30-4f56-aba4-10a5339044f7-audit-dir\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-oauth-serving-cert\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237728 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-oauth-config\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237763 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d16b4be5-ea4f-4d90-b2be-3e9582858283-serving-cert\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237784 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237824 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237871 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8fb0eb7-f6b1-4097-b167-c443441a28a6-trusted-ca\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237892 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8742d972-144c-43ee-8586-0901733ecb49-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237913 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j26gd\" (UniqueName: \"kubernetes.io/projected/8742d972-144c-43ee-8586-0901733ecb49-kube-api-access-j26gd\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237934 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-policies\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.237983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-etcd-client\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238022 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238082 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv6mx\" (UniqueName: \"kubernetes.io/projected/fa4ab0d2-e28e-4281-ba62-51165d7894e0-kube-api-access-xv6mx\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238105 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fb0eb7-f6b1-4097-b167-c443441a28a6-serving-cert\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238156 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-image-import-ca\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238179 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-serving-cert\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238198 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238238 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fvq9\" (UniqueName: \"kubernetes.io/projected/a8fb0eb7-f6b1-4097-b167-c443441a28a6-kube-api-access-4fvq9\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238255 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238271 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-config\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238307 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-encryption-config\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238327 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0a749f-6d30-4f56-aba4-10a5339044f7-node-pullsecrets\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-etcd-serving-ca\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrxf\" (UniqueName: \"kubernetes.io/projected/6190d999-660a-44f5-a51b-cd53647289db-kube-api-access-nfrxf\") pod \"downloads-7954f5f757-ms8dm\" (UID: \"6190d999-660a-44f5-a51b-cd53647289db\") " pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238412 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-config\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-encryption-config\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238618 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fb0eb7-f6b1-4097-b167-c443441a28a6-config\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238642 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238689 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: 
\"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238713 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238733 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8742d972-144c-43ee-8586-0901733ecb49-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238801 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-etcd-client\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238873 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8742d972-144c-43ee-8586-0901733ecb49-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238896 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-dir\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238961 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c3091040-d378-4c3b-9f64-bf750e9b27f1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" 
Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.238983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-audit\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239037 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-client-ca\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239059 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-service-ca\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239111 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4ab0d2-e28e-4281-ba62-51165d7894e0-audit-dir\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239134 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-client-ca\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlmv\" (UniqueName: \"kubernetes.io/projected/d16b4be5-ea4f-4d90-b2be-3e9582858283-kube-api-access-lwlmv\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239225 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239279 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nd8c"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239243 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-serving-cert\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239346 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-serving-cert\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: 
\"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239398 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv89g\" (UniqueName: \"kubernetes.io/projected/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-kube-api-access-sv89g\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.239901 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0a749f-6d30-4f56-aba4-10a5339044f7-node-pullsecrets\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.240010 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.240200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0a749f-6d30-4f56-aba4-10a5339044f7-audit-dir\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.240685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-etcd-serving-ca\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.240961 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-image-import-ca\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.240966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-config\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.241442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc 
kubenswrapper[4675]: I1121 13:35:01.241480 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-config\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.241867 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.243256 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-oauth-serving-cert\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.244303 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.244385 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8742d972-144c-43ee-8586-0901733ecb49-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.244424 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8fb0eb7-f6b1-4097-b167-c443441a28a6-config\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.244572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8fb0eb7-f6b1-4097-b167-c443441a28a6-trusted-ca\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.244910 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.245342 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5872\" 
(UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.245449 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-encryption-config\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.245650 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c3091040-d378-4c3b-9f64-bf750e9b27f1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.245786 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.245899 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-trusted-ca-bundle\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246025 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246464 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-config\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246514 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246537 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246625 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: 
\"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa4ab0d2-e28e-4281-ba62-51165d7894e0-audit-policies\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246680 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa4ab0d2-e28e-4281-ba62-51165d7894e0-audit-dir\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246684 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-images\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246699 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.246956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3091040-d378-4c3b-9f64-bf750e9b27f1-serving-cert\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-audit\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-service-ca\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-policies\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247594 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247634 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-dir\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247891 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.247949 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-config\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.248175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0a749f-6d30-4f56-aba4-10a5339044f7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.248375 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-client-ca\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.248889 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5svl"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.248951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8fb0eb7-f6b1-4097-b167-c443441a28a6-serving-cert\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.249197 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.249516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.249596 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-serving-cert\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.250089 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8742d972-144c-43ee-8586-0901733ecb49-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.250104 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-serving-cert\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.250130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d16b4be5-ea4f-4d90-b2be-3e9582858283-serving-cert\") pod \"controller-manager-879f6c89f-k5872\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.250232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.250441 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-oauth-config\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.250772 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-etcd-client\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.251250 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0a749f-6d30-4f56-aba4-10a5339044f7-encryption-config\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.251510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.251550 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-g5t4j"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.252406 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5t4j" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.253098 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.253227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-serving-cert\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.254503 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mnf4"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.255584 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-v694d"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.256584 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tkjfz"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.257359 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.257584 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.258683 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.259664 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.260962 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gh9gt"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.261572 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.262006 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5t4j"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.263027 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tkjfz"] Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.264462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa4ab0d2-e28e-4281-ba62-51165d7894e0-etcd-client\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.264486 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.285178 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.304233 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.331078 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.341286 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-client-ca\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.341412 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-config\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.341483 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9864d\" (UniqueName: \"kubernetes.io/projected/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-kube-api-access-9864d\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.341599 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-serving-cert\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.343058 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-config\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: 
\"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.345983 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.348688 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-client-ca\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.350660 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-serving-cert\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.364432 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.384622 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.404583 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.444991 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.465193 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.484036 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.503924 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.524785 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.544686 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.565026 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.585726 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.605307 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 21 13:35:01 crc 
kubenswrapper[4675]: I1121 13:35:01.626017 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.645893 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.665719 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.686201 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.705578 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.725114 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.766421 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.785918 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.805600 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.826001 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.845839 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.866163 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.885141 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.905496 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.924725 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.945864 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.965616 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 21 13:35:01 crc kubenswrapper[4675]: I1121 13:35:01.985853 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.004243 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.025061 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.044861 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.065188 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.085707 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.105625 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.126037 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.145352 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.163367 4675 request.go:700] Waited for 1.001351603s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0 Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.165340 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.185427 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.205493 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.237544 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.246255 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.265039 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.285119 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.305202 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.325595 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.344918 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.364967 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.386124 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.406130 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.425619 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.445316 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.466550 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.485061 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.506048 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.525745 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.544956 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.566242 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.586157 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.605601 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.624941 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.645234 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.664889 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.684954 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.705536 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.726565 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.745236 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.764957 4675 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.785830 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.804947 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.853253 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv89g\" (UniqueName: \"kubernetes.io/projected/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-kube-api-access-sv89g\") pod \"oauth-openshift-558db77b4-tz4xg\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.872350 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fvq9\" (UniqueName: \"kubernetes.io/projected/a8fb0eb7-f6b1-4097-b167-c443441a28a6-kube-api-access-4fvq9\") pod \"console-operator-58897d9998-5kdbn\" (UID: \"a8fb0eb7-f6b1-4097-b167-c443441a28a6\") " pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.896636 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrxf\" (UniqueName: \"kubernetes.io/projected/6190d999-660a-44f5-a51b-cd53647289db-kube-api-access-nfrxf\") pod \"downloads-7954f5f757-ms8dm\" (UID: \"6190d999-660a-44f5-a51b-cd53647289db\") " pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.902830 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.908609 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.909372 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j26gd\" (UniqueName: \"kubernetes.io/projected/8742d972-144c-43ee-8586-0901733ecb49-kube-api-access-j26gd\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.927757 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv6mx\" (UniqueName: \"kubernetes.io/projected/fa4ab0d2-e28e-4281-ba62-51165d7894e0-kube-api-access-xv6mx\") pod \"apiserver-7bbb656c7d-vvrpk\" (UID: \"fa4ab0d2-e28e-4281-ba62-51165d7894e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.938405 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.944521 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcv9n\" (UniqueName: \"kubernetes.io/projected/0d4777cf-9799-450d-a46f-5d5bedeaa706-kube-api-access-xcv9n\") pod \"console-f9d7485db-cz299\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.960955 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8742d972-144c-43ee-8586-0901733ecb49-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-55wvx\" (UID: \"8742d972-144c-43ee-8586-0901733ecb49\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:02 crc kubenswrapper[4675]: I1121 13:35:02.982802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwsz\" (UniqueName: \"kubernetes.io/projected/c3091040-d378-4c3b-9f64-bf750e9b27f1-kube-api-access-rmwsz\") pod \"openshift-config-operator-7777fb866f-sg677\" (UID: \"c3091040-d378-4c3b-9f64-bf750e9b27f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.001391 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvzqg\" (UniqueName: \"kubernetes.io/projected/cb0a749f-6d30-4f56-aba4-10a5339044f7-kube-api-access-nvzqg\") pod \"apiserver-76f77b778f-6r8cf\" (UID: \"cb0a749f-6d30-4f56-aba4-10a5339044f7\") " pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.022579 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcffw\" (UniqueName: \"kubernetes.io/projected/eb5a0d6b-3347-4d29-90a5-f554c65e5ddb-kube-api-access-xcffw\") pod \"machine-api-operator-5694c8668f-gwz5c\" (UID: \"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.044937 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlmv\" (UniqueName: \"kubernetes.io/projected/d16b4be5-ea4f-4d90-b2be-3e9582858283-kube-api-access-lwlmv\") pod \"controller-manager-879f6c89f-k5872\" (UID: 
\"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.045393 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.065729 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.075587 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.087996 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.102191 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.105307 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.125796 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.126904 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.140364 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.146250 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.152380 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.165823 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.178535 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tz4xg"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.183751 4675 request.go:700] Waited for 1.922012964s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.185761 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.191760 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8acbfba2_82d1_4e2a_bd77_7f35f84d35eb.slice/crio-0f02772ed2f2e60a8ba8b1f22d376c1927d836a120fa52026fe0547f2e9ab315 WatchSource:0}: Error finding container 0f02772ed2f2e60a8ba8b1f22d376c1927d836a120fa52026fe0547f2e9ab315: Status 404 returned error can't find the container with id 0f02772ed2f2e60a8ba8b1f22d376c1927d836a120fa52026fe0547f2e9ab315 Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.204514 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.221466 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.225590 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.228509 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.261005 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9864d\" (UniqueName: \"kubernetes.io/projected/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-kube-api-access-9864d\") pod \"route-controller-manager-6576b87f9c-kwl5c\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.276215 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.333747 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ms8dm"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.346564 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5kdbn"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.359956 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwz5c"] Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.366106 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8fb0eb7_f6b1_4097_b167_c443441a28a6.slice/crio-af5a4540a93ad5084a2a483da7c3658a2324c5c36b78258cb92b8e1ed70f6d49 WatchSource:0}: Error finding container af5a4540a93ad5084a2a483da7c3658a2324c5c36b78258cb92b8e1ed70f6d49: Status 404 returned error can't find the container with id af5a4540a93ad5084a2a483da7c3658a2324c5c36b78258cb92b8e1ed70f6d49 Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d459863-1177-489f-94af-1cc9c352e509-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369519 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3bc6824-00ea-42b0-99f7-ca0145d2e630-metrics-tls\") pod \"dns-operator-744455d44c-smdhk\" (UID: \"d3bc6824-00ea-42b0-99f7-ca0145d2e630\") " pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e19a2b0-8360-4276-a5e6-d543e9941234-serving-cert\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369616 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-serving-cert\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369728 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da9a4b5e-bce2-48d5-9aec-e681063b19de-service-ca-bundle\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369776 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-client\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369812 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-certificates\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369859 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-config\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369946 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4fwq\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-kube-api-access-z4fwq\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.369973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-metrics-certs\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370005 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvdl\" (UniqueName: \"kubernetes.io/projected/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-kube-api-access-vcvdl\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370195 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-default-certificate\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370234 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76m7q\" (UniqueName: 
\"kubernetes.io/projected/d8ba03e2-eef6-44ce-908e-2606777e9fe4-kube-api-access-76m7q\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-service-ca\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370948 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8ba03e2-eef6-44ce-908e-2606777e9fe4-auth-proxy-config\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.370968 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb3c26c-805d-4b41-b66b-b2b7b87583de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371502 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-trusted-ca\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ba03e2-eef6-44ce-908e-2606777e9fe4-config\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371564 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppdft\" (UniqueName: 
\"kubernetes.io/projected/d3bc6824-00ea-42b0-99f7-ca0145d2e630-kube-api-access-ppdft\") pod \"dns-operator-744455d44c-smdhk\" (UID: \"d3bc6824-00ea-42b0-99f7-ca0145d2e630\") " pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371600 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/872d6e82-4322-4b06-a8e1-c3f23aea4c45-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371617 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhvcv\" (UniqueName: \"kubernetes.io/projected/da9a4b5e-bce2-48d5-9aec-e681063b19de-kube-api-access-rhvcv\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371633 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8ba03e2-eef6-44ce-908e-2606777e9fe4-machine-approver-tls\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb3c26c-805d-4b41-b66b-b2b7b87583de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.371702 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.371932 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:03.871920703 +0000 UTC m=+180.598335430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372018 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-trusted-ca\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372374 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-tls\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372397 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-bound-sa-token\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372422 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqrc\" (UniqueName: \"kubernetes.io/projected/3e19a2b0-8360-4276-a5e6-d543e9941234-kube-api-access-gkqrc\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d459863-1177-489f-94af-1cc9c352e509-config\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372490 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/872d6e82-4322-4b06-a8e1-c3f23aea4c45-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 
13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372506 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-stats-auth\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372546 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bb3c26c-805d-4b41-b66b-b2b7b87583de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372601 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-ca\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372624 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjqfl\" (UniqueName: \"kubernetes.io/projected/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-kube-api-access-tjqfl\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d459863-1177-489f-94af-1cc9c352e509-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372680 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-service-ca-bundle\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 
13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372696 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-metrics-tls\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffdb\" (UniqueName: \"kubernetes.io/projected/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-kube-api-access-nffdb\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372762 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-config\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.372806 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bslv\" (UniqueName: \"kubernetes.io/projected/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-kube-api-access-8bslv\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.374923 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb5a0d6b_3347_4d29_90a5_f554c65e5ddb.slice/crio-493d540623c00356392c151e7d0bb656c53ba2996c513fd7189d282e0cb6ed64 WatchSource:0}: Error finding container 493d540623c00356392c151e7d0bb656c53ba2996c513fd7189d282e0cb6ed64: Status 404 returned error can't find the container with id 493d540623c00356392c151e7d0bb656c53ba2996c513fd7189d282e0cb6ed64 Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475415 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-config-volume\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475436 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/73a0c3be-9995-4143-8a8a-13d00ff3e702-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rhnsm\" (UID: \"73a0c3be-9995-4143-8a8a-13d00ff3e702\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475458 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-trusted-ca\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475477 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-tls\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkqrc\" (UniqueName: \"kubernetes.io/projected/3e19a2b0-8360-4276-a5e6-d543e9941234-kube-api-access-gkqrc\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475508 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d459863-1177-489f-94af-1cc9c352e509-config\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475523 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/feee8664-38cf-4761-b00f-7b9551d7916a-apiservice-cert\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475539 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/561c5bff-b0cf-4d0a-8035-7898ce300a38-cert\") pod \"ingress-canary-g5t4j\" (UID: \"561c5bff-b0cf-4d0a-8035-7898ce300a38\") " pod="openshift-ingress-canary/ingress-canary-g5t4j" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475555 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-signing-key\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475571 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bb3c26c-805d-4b41-b66b-b2b7b87583de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475586 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a531689-8a1f-425d-a966-d4cbb209b38a-srv-cert\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-metrics-tls\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2k5\" (UniqueName: \"kubernetes.io/projected/a8863d93-a7c1-450c-8712-45c98a7facc6-kube-api-access-js2k5\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475640 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nffdb\" (UniqueName: \"kubernetes.io/projected/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-kube-api-access-nffdb\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475657 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e4afd2ae-2106-47f9-b516-a2987c0d359d-node-bootstrap-token\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475672 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bslv\" (UniqueName: \"kubernetes.io/projected/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-kube-api-access-8bslv\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-registration-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475704 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d459863-1177-489f-94af-1cc9c352e509-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.475757 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:03.975741433 +0000 UTC m=+180.702156160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.475788 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8863d93-a7c1-450c-8712-45c98a7facc6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476408 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a531689-8a1f-425d-a966-d4cbb209b38a-profile-collector-cert\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476540 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3bc6824-00ea-42b0-99f7-ca0145d2e630-metrics-tls\") pod \"dns-operator-744455d44c-smdhk\" (UID: \"d3bc6824-00ea-42b0-99f7-ca0145d2e630\") " pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8863d93-a7c1-450c-8712-45c98a7facc6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476708 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e19a2b0-8360-4276-a5e6-d543e9941234-serving-cert\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476763 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/90a5318c-96de-40ae-a8f4-87241ab72f28-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7wq8v\" (UID: \"90a5318c-96de-40ae-a8f4-87241ab72f28\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" Nov 21 
13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476787 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-serving-cert\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476807 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-certificates\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86vz\" (UniqueName: \"kubernetes.io/projected/73a0c3be-9995-4143-8a8a-13d00ff3e702-kube-api-access-d86vz\") pod \"cluster-samples-operator-665b6dd947-rhnsm\" (UID: \"73a0c3be-9995-4143-8a8a-13d00ff3e702\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476841 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4fwq\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-kube-api-access-z4fwq\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476857 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-metrics-certs\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476872 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e4afd2ae-2106-47f9-b516-a2987c0d359d-certs\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-csi-data-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476925 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d459863-1177-489f-94af-1cc9c352e509-config\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.476930 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mxc6j\" (UniqueName: \"kubernetes.io/projected/e053a129-b32f-4092-831a-db4052fad241-kube-api-access-mxc6j\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477012 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-default-certificate\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477250 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76m7q\" (UniqueName: \"kubernetes.io/projected/d8ba03e2-eef6-44ce-908e-2606777e9fe4-kube-api-access-76m7q\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477475 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477521 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7v27\" (UniqueName: \"kubernetes.io/projected/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-kube-api-access-r7v27\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07333151-cefa-4d07-aeb0-e88764760bfc-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmpjz\" (UID: \"07333151-cefa-4d07-aeb0-e88764760bfc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v656\" (UniqueName: \"kubernetes.io/projected/e4afd2ae-2106-47f9-b516-a2987c0d359d-kube-api-access-6v656\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477591 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-metrics-tls\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477635 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/feee8664-38cf-4761-b00f-7b9551d7916a-webhook-cert\") pod 
\"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477656 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a962f3e-9813-4fe0-81cc-86faebfc6446-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477685 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-trusted-ca\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477723 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477749 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwh2f\" (UniqueName: \"kubernetes.io/projected/90a5318c-96de-40ae-a8f4-87241ab72f28-kube-api-access-xwh2f\") pod \"control-plane-machine-set-operator-78cbb6b69f-7wq8v\" (UID: \"90a5318c-96de-40ae-a8f4-87241ab72f28\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477772 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-srv-cert\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477803 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ba03e2-eef6-44ce-908e-2606777e9fe4-config\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477824 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/531294e8-2e33-4f1f-848a-b2d19d8e6102-secret-volume\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbxhf\" (UniqueName: \"kubernetes.io/projected/0a531689-8a1f-425d-a966-d4cbb209b38a-kube-api-access-cbxhf\") pod 
\"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a962f3e-9813-4fe0-81cc-86faebfc6446-proxy-tls\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhvcv\" (UniqueName: \"kubernetes.io/projected/da9a4b5e-bce2-48d5-9aec-e681063b19de-kube-api-access-rhvcv\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477923 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8ba03e2-eef6-44ce-908e-2606777e9fe4-machine-approver-tls\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477945 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b565103-d77f-4ab0-b1cb-82d0912b4984-config\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99ff329a-9fc3-4e73-9c02-abad7af09113-proxy-tls\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.477996 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/872d6e82-4322-4b06-a8e1-c3f23aea4c45-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478018 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntkd\" (UniqueName: \"kubernetes.io/projected/531294e8-2e33-4f1f-848a-b2d19d8e6102-kube-api-access-vntkd\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lpb\" (UniqueName: \"kubernetes.io/projected/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-kube-api-access-v5lpb\") pod 
\"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478085 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb3c26c-805d-4b41-b66b-b2b7b87583de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mfv\" (UniqueName: \"kubernetes.io/projected/39e32a09-8172-443c-bd56-00a536a06de2-kube-api-access-v9mfv\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478131 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c789\" (UniqueName: \"kubernetes.io/projected/feee8664-38cf-4761-b00f-7b9551d7916a-kube-api-access-9c789\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478152 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvz2c\" (UniqueName: \"kubernetes.io/projected/07897227-3975-4a8e-88c7-af39b89133af-kube-api-access-kvz2c\") pod \"multus-admission-controller-857f4d67dd-z8jjw\" (UID: \"07897227-3975-4a8e-88c7-af39b89133af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478182 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-trusted-ca\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478186 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478236 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478261 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-plugins-dir\") pod 
\"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-bound-sa-token\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478314 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hw4\" (UniqueName: \"kubernetes.io/projected/88a7ce6c-1fed-43f5-82c6-44b4fac52dad-kube-api-access-98hw4\") pod \"migrator-59844c95c7-tz7pm\" (UID: \"88a7ce6c-1fed-43f5-82c6-44b4fac52dad\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-config\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478358 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxzs\" (UniqueName: \"kubernetes.io/projected/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-kube-api-access-6sxzs\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478383 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-stats-auth\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478404 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-mountpoint-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478440 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/872d6e82-4322-4b06-a8e1-c3f23aea4c45-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478467 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 
crc kubenswrapper[4675]: I1121 13:35:03.478488 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmvq\" (UniqueName: \"kubernetes.io/projected/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-kube-api-access-zbmvq\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478550 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-certificates\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478692 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqfl\" (UniqueName: \"kubernetes.io/projected/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-kube-api-access-tjqfl\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478720 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07897227-3975-4a8e-88c7-af39b89133af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z8jjw\" (UID: \"07897227-3975-4a8e-88c7-af39b89133af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478746 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-ca\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478773 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d459863-1177-489f-94af-1cc9c352e509-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-service-ca-bundle\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478820 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-config\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478843 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/feee8664-38cf-4761-b00f-7b9551d7916a-tmpfs\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.478879 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blstb\" (UniqueName: \"kubernetes.io/projected/0a962f3e-9813-4fe0-81cc-86faebfc6446-kube-api-access-blstb\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.478935 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:03.978916672 +0000 UTC m=+180.705331399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.480538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e19a2b0-8360-4276-a5e6-d543e9941234-serving-cert\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.481109 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ba03e2-eef6-44ce-908e-2606777e9fe4-config\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.481305 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.482735 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-default-certificate\") pod \"router-default-5444994796-xs2g9\" (UID: 
\"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.482837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.482865 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bb3c26c-805d-4b41-b66b-b2b7b87583de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/872d6e82-4322-4b06-a8e1-c3f23aea4c45-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483264 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3bc6824-00ea-42b0-99f7-ca0145d2e630-metrics-tls\") pod \"dns-operator-744455d44c-smdhk\" (UID: \"d3bc6824-00ea-42b0-99f7-ca0145d2e630\") " pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-tls\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483862 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzpk\" (UniqueName: \"kubernetes.io/projected/07333151-cefa-4d07-aeb0-e88764760bfc-kube-api-access-tlzpk\") pod \"package-server-manager-789f6589d5-hmpjz\" (UID: \"07333151-cefa-4d07-aeb0-e88764760bfc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483920 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5dkw\" (UniqueName: \"kubernetes.io/projected/99ff329a-9fc3-4e73-9c02-abad7af09113-kube-api-access-n5dkw\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.483949 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-serving-cert\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.484425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-service-ca-bundle\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.485538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-metrics-certs\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.485652 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e19a2b0-8360-4276-a5e6-d543e9941234-config\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.486188 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-ca\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.486607 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-serving-cert\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.486939 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/da9a4b5e-bce2-48d5-9aec-e681063b19de-stats-auth\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.487383 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-trusted-ca\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.487877 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.488241 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d459863-1177-489f-94af-1cc9c352e509-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.489306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.489808 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b565103-d77f-4ab0-b1cb-82d0912b4984-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.489842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da9a4b5e-bce2-48d5-9aec-e681063b19de-service-ca-bundle\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.489879 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-client\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.489893 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490498 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da9a4b5e-bce2-48d5-9aec-e681063b19de-service-ca-bundle\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-config\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 
13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0a962f3e-9813-4fe0-81cc-86faebfc6446-images\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490577 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99ff329a-9fc3-4e73-9c02-abad7af09113-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvdl\" (UniqueName: \"kubernetes.io/projected/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-kube-api-access-vcvdl\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490837 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlc85\" (UniqueName: \"kubernetes.io/projected/561c5bff-b0cf-4d0a-8035-7898ce300a38-kube-api-access-jlc85\") pod \"ingress-canary-g5t4j\" (UID: \"561c5bff-b0cf-4d0a-8035-7898ce300a38\") " pod="openshift-ingress-canary/ingress-canary-g5t4j" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490874 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490941 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-config\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490949 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-service-ca\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.490975 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8ba03e2-eef6-44ce-908e-2606777e9fe4-auth-proxy-config\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491336 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb3c26c-805d-4b41-b66b-b2b7b87583de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491356 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/872d6e82-4322-4b06-a8e1-c3f23aea4c45-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-socket-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491506 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-service-ca\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491544 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/531294e8-2e33-4f1f-848a-b2d19d8e6102-config-volume\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491576 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491597 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppdft\" (UniqueName: \"kubernetes.io/projected/d3bc6824-00ea-42b0-99f7-ca0145d2e630-kube-api-access-ppdft\") pod \"dns-operator-744455d44c-smdhk\" (UID: \"d3bc6824-00ea-42b0-99f7-ca0145d2e630\") " pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491613 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-signing-cabundle\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491631 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b565103-d77f-4ab0-b1cb-82d0912b4984-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.491764 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8ba03e2-eef6-44ce-908e-2606777e9fe4-auth-proxy-config\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.492094 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bb3c26c-805d-4b41-b66b-b2b7b87583de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.492185 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.493891 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8ba03e2-eef6-44ce-908e-2606777e9fe4-machine-approver-tls\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.507368 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.507499 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-etcd-client\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.530203 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3091040_d378_4c3b_9f64_bf750e9b27f1.slice/crio-475ed8d76fad3c8376df22fee1ebf14e1312453dcfaba5e87543c0f029848938 WatchSource:0}: Error finding container 475ed8d76fad3c8376df22fee1ebf14e1312453dcfaba5e87543c0f029848938: Status 404 returned error can't find the container with id 475ed8d76fad3c8376df22fee1ebf14e1312453dcfaba5e87543c0f029848938 Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 
13:35:03.532620 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sg677"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.533887 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkqrc\" (UniqueName: \"kubernetes.io/projected/3e19a2b0-8360-4276-a5e6-d543e9941234-kube-api-access-gkqrc\") pod \"authentication-operator-69f744f599-tcjb8\" (UID: \"3e19a2b0-8360-4276-a5e6-d543e9941234\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.534479 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4ab0d2_e28e_4281_ba62_51165d7894e0.slice/crio-79888d101de96b7adb765146ac482a14c2fa810c67f3a0a931ac1cf048b8104e WatchSource:0}: Error finding container 79888d101de96b7adb765146ac482a14c2fa810c67f3a0a931ac1cf048b8104e: Status 404 returned error can't find the container with id 79888d101de96b7adb765146ac482a14c2fa810c67f3a0a931ac1cf048b8104e Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.537612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d459863-1177-489f-94af-1cc9c352e509-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s44fv\" (UID: \"9d459863-1177-489f-94af-1cc9c352e509\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.537614 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"] Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.546139 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e36cf0_9784_40cc_bbe5_19cfbc5e8295.slice/crio-eae68c881ac28908c738a3bd0b5ff106e1622c9b4df98958cd72fb39e2ebc11a WatchSource:0}: Error finding container eae68c881ac28908c738a3bd0b5ff106e1622c9b4df98958cd72fb39e2ebc11a: Status 404 returned error can't find the container with id eae68c881ac28908c738a3bd0b5ff106e1622c9b4df98958cd72fb39e2ebc11a Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.558606 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6r8cf"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.566522 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nffdb\" (UniqueName: \"kubernetes.io/projected/2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d-kube-api-access-nffdb\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkn72\" (UID: \"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.579616 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bslv\" (UniqueName: \"kubernetes.io/projected/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-kube-api-access-8bslv\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.584030 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" Nov 21 13:35:03 crc kubenswrapper[4675]: W1121 13:35:03.584089 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb0a749f_6d30_4f56_aba4_10a5339044f7.slice/crio-f41e1379a8a60d0e95eb45e64dc25a6c1a55f93507544669e3b7710d2380a5ac WatchSource:0}: Error finding container f41e1379a8a60d0e95eb45e64dc25a6c1a55f93507544669e3b7710d2380a5ac: Status 404 returned error can't find the container with id f41e1379a8a60d0e95eb45e64dc25a6c1a55f93507544669e3b7710d2380a5ac Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592551 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592806 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-signing-cabundle\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b565103-d77f-4ab0-b1cb-82d0912b4984-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592867 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-config-volume\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/73a0c3be-9995-4143-8a8a-13d00ff3e702-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rhnsm\" (UID: \"73a0c3be-9995-4143-8a8a-13d00ff3e702\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592916 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/feee8664-38cf-4761-b00f-7b9551d7916a-apiservice-cert\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592936 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/561c5bff-b0cf-4d0a-8035-7898ce300a38-cert\") pod \"ingress-canary-g5t4j\" (UID: \"561c5bff-b0cf-4d0a-8035-7898ce300a38\") " pod="openshift-ingress-canary/ingress-canary-g5t4j" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 
13:35:03.592957 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-signing-key\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.592979 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a531689-8a1f-425d-a966-d4cbb209b38a-srv-cert\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js2k5\" (UniqueName: \"kubernetes.io/projected/a8863d93-a7c1-450c-8712-45c98a7facc6-kube-api-access-js2k5\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593023 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e4afd2ae-2106-47f9-b516-a2987c0d359d-node-bootstrap-token\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-registration-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8863d93-a7c1-450c-8712-45c98a7facc6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a531689-8a1f-425d-a966-d4cbb209b38a-profile-collector-cert\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593131 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8863d93-a7c1-450c-8712-45c98a7facc6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593173 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/90a5318c-96de-40ae-a8f4-87241ab72f28-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7wq8v\" (UID: \"90a5318c-96de-40ae-a8f4-87241ab72f28\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-serving-cert\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593221 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86vz\" (UniqueName: \"kubernetes.io/projected/73a0c3be-9995-4143-8a8a-13d00ff3e702-kube-api-access-d86vz\") pod \"cluster-samples-operator-665b6dd947-rhnsm\" (UID: \"73a0c3be-9995-4143-8a8a-13d00ff3e702\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593250 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e4afd2ae-2106-47f9-b516-a2987c0d359d-certs\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-csi-data-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593315 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxc6j\" (UniqueName: \"kubernetes.io/projected/e053a129-b32f-4092-831a-db4052fad241-kube-api-access-mxc6j\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593350 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07333151-cefa-4d07-aeb0-e88764760bfc-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmpjz\" (UID: \"07333151-cefa-4d07-aeb0-e88764760bfc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593372 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v656\" (UniqueName: \"kubernetes.io/projected/e4afd2ae-2106-47f9-b516-a2987c0d359d-kube-api-access-6v656\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593392 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-metrics-tls\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593412 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7v27\" (UniqueName: \"kubernetes.io/projected/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-kube-api-access-r7v27\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593440 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a962f3e-9813-4fe0-81cc-86faebfc6446-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/feee8664-38cf-4761-b00f-7b9551d7916a-webhook-cert\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593494 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593515 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwh2f\" (UniqueName: \"kubernetes.io/projected/90a5318c-96de-40ae-a8f4-87241ab72f28-kube-api-access-xwh2f\") pod \"control-plane-machine-set-operator-78cbb6b69f-7wq8v\" (UID: \"90a5318c-96de-40ae-a8f4-87241ab72f28\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593536 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-srv-cert\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593557 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbxhf\" (UniqueName: \"kubernetes.io/projected/0a531689-8a1f-425d-a966-d4cbb209b38a-kube-api-access-cbxhf\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593577 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a962f3e-9813-4fe0-81cc-86faebfc6446-proxy-tls\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593600 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/531294e8-2e33-4f1f-848a-b2d19d8e6102-secret-volume\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593622 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b565103-d77f-4ab0-b1cb-82d0912b4984-config\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99ff329a-9fc3-4e73-9c02-abad7af09113-proxy-tls\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593675 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lpb\" (UniqueName: \"kubernetes.io/projected/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-kube-api-access-v5lpb\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593696 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntkd\" (UniqueName: \"kubernetes.io/projected/531294e8-2e33-4f1f-848a-b2d19d8e6102-kube-api-access-vntkd\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593716 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9mfv\" (UniqueName: \"kubernetes.io/projected/39e32a09-8172-443c-bd56-00a536a06de2-kube-api-access-v9mfv\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593735 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c789\" (UniqueName: \"kubernetes.io/projected/feee8664-38cf-4761-b00f-7b9551d7916a-kube-api-access-9c789\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593755 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvz2c\" (UniqueName: \"kubernetes.io/projected/07897227-3975-4a8e-88c7-af39b89133af-kube-api-access-kvz2c\") pod \"multus-admission-controller-857f4d67dd-z8jjw\" (UID: \"07897227-3975-4a8e-88c7-af39b89133af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" 
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-plugins-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593821 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hw4\" (UniqueName: \"kubernetes.io/projected/88a7ce6c-1fed-43f5-82c6-44b4fac52dad-kube-api-access-98hw4\") pod \"migrator-59844c95c7-tz7pm\" (UID: \"88a7ce6c-1fed-43f5-82c6-44b4fac52dad\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-config\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593882 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sxzs\" (UniqueName: \"kubernetes.io/projected/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-kube-api-access-6sxzs\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593902 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-mountpoint-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593933 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbmvq\" (UniqueName: \"kubernetes.io/projected/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-kube-api-access-zbmvq\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.593966 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07897227-3975-4a8e-88c7-af39b89133af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z8jjw\" (UID: \"07897227-3975-4a8e-88c7-af39b89133af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594000 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/feee8664-38cf-4761-b00f-7b9551d7916a-tmpfs\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594022 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blstb\" (UniqueName: \"kubernetes.io/projected/0a962f3e-9813-4fe0-81cc-86faebfc6446-kube-api-access-blstb\") pod 
\"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594049 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlzpk\" (UniqueName: \"kubernetes.io/projected/07333151-cefa-4d07-aeb0-e88764760bfc-kube-api-access-tlzpk\") pod \"package-server-manager-789f6589d5-hmpjz\" (UID: \"07333151-cefa-4d07-aeb0-e88764760bfc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594105 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5dkw\" (UniqueName: \"kubernetes.io/projected/99ff329a-9fc3-4e73-9c02-abad7af09113-kube-api-access-n5dkw\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b565103-d77f-4ab0-b1cb-82d0912b4984-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99ff329a-9fc3-4e73-9c02-abad7af09113-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0a962f3e-9813-4fe0-81cc-86faebfc6446-images\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlc85\" (UniqueName: \"kubernetes.io/projected/561c5bff-b0cf-4d0a-8035-7898ce300a38-kube-api-access-jlc85\") pod \"ingress-canary-g5t4j\" (UID: \"561c5bff-b0cf-4d0a-8035-7898ce300a38\") " pod="openshift-ingress-canary/ingress-canary-g5t4j" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-socket-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: 
\"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594291 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/531294e8-2e33-4f1f-848a-b2d19d8e6102-config-volume\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.594318 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.597618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8863d93-a7c1-450c-8712-45c98a7facc6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.598088 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-mountpoint-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.598289 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.098265181 +0000 UTC m=+180.824679988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.599672 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-signing-cabundle\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.601266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-srv-cert\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.602392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-config-volume\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.602779 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/feee8664-38cf-4761-b00f-7b9551d7916a-tmpfs\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.602995 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a962f3e-9813-4fe0-81cc-86faebfc6446-proxy-tls\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.603039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-csi-data-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.603222 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a962f3e-9813-4fe0-81cc-86faebfc6446-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.603774 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b565103-d77f-4ab0-b1cb-82d0912b4984-config\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: 
\"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.603823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-serving-cert\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.603863 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/90a5318c-96de-40ae-a8f4-87241ab72f28-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7wq8v\" (UID: \"90a5318c-96de-40ae-a8f4-87241ab72f28\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.604084 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-registration-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.604120 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07897227-3975-4a8e-88c7-af39b89133af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z8jjw\" (UID: \"07897227-3975-4a8e-88c7-af39b89133af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.604400 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99ff329a-9fc3-4e73-9c02-abad7af09113-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.604683 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-socket-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.604934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.605129 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0a962f3e-9813-4fe0-81cc-86faebfc6446-images\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 
13:35:03.605280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e053a129-b32f-4092-831a-db4052fad241-plugins-dir\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.605520 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/531294e8-2e33-4f1f-848a-b2d19d8e6102-config-volume\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.605542 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b565103-d77f-4ab0-b1cb-82d0912b4984-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.605914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-config\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.606608 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e4afd2ae-2106-47f9-b516-a2987c0d359d-node-bootstrap-token\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.607037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-metrics-tls\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.607241 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/531294e8-2e33-4f1f-848a-b2d19d8e6102-secret-volume\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.607270 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/561c5bff-b0cf-4d0a-8035-7898ce300a38-cert\") pod \"ingress-canary-g5t4j\" (UID: \"561c5bff-b0cf-4d0a-8035-7898ce300a38\") " pod="openshift-ingress-canary/ingress-canary-g5t4j" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.607441 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 
13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.608430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/07333151-cefa-4d07-aeb0-e88764760bfc-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmpjz\" (UID: \"07333151-cefa-4d07-aeb0-e88764760bfc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.608670 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cz299"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.611164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/feee8664-38cf-4761-b00f-7b9551d7916a-webhook-cert\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.612221 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a531689-8a1f-425d-a966-d4cbb209b38a-profile-collector-cert\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.612415 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/feee8664-38cf-4761-b00f-7b9551d7916a-apiservice-cert\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.612560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8863d93-a7c1-450c-8712-45c98a7facc6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.612918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/73a0c3be-9995-4143-8a8a-13d00ff3e702-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rhnsm\" (UID: \"73a0c3be-9995-4143-8a8a-13d00ff3e702\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.612976 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.613355 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-signing-key\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.613471 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e4afd2ae-2106-47f9-b516-a2987c0d359d-certs\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.613526 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4fwq\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-kube-api-access-z4fwq\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.613974 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99ff329a-9fc3-4e73-9c02-abad7af09113-proxy-tls\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.614400 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a531689-8a1f-425d-a966-d4cbb209b38a-srv-cert\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.621693 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76m7q\" (UniqueName: \"kubernetes.io/projected/d8ba03e2-eef6-44ce-908e-2606777e9fe4-kube-api-access-76m7q\") pod \"machine-approver-56656f9798-k6r5g\" (UID: \"d8ba03e2-eef6-44ce-908e-2606777e9fe4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.627679 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.644626 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mm7d4\" (UID: \"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.653086 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.662447 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqfl\" (UniqueName: \"kubernetes.io/projected/4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6-kube-api-access-tjqfl\") pod \"etcd-operator-b45778765-rg5ls\" (UID: \"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.665907 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5872"] Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.680356 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-bound-sa-token\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.690722 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.697026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.697529 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.197515886 +0000 UTC m=+180.923930613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.701983 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhvcv\" (UniqueName: \"kubernetes.io/projected/da9a4b5e-bce2-48d5-9aec-e681063b19de-kube-api-access-rhvcv\") pod \"router-default-5444994796-xs2g9\" (UID: \"da9a4b5e-bce2-48d5-9aec-e681063b19de\") " pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.704409 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.739628 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bb3c26c-805d-4b41-b66b-b2b7b87583de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7p5h\" (UID: \"2bb3c26c-805d-4b41-b66b-b2b7b87583de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.741114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" event={"ID":"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb","Type":"ContainerStarted","Data":"6318e08eaa3d5c785e419ca1ffa419de02642530b79c459f4b6b42a30d938e9e"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.741163 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" event={"ID":"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb","Type":"ContainerStarted","Data":"493d540623c00356392c151e7d0bb656c53ba2996c513fd7189d282e0cb6ed64"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.742695 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" event={"ID":"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb","Type":"ContainerStarted","Data":"f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.742734 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" event={"ID":"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb","Type":"ContainerStarted","Data":"0f02772ed2f2e60a8ba8b1f22d376c1927d836a120fa52026fe0547f2e9ab315"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.742988 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.745207 4675 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tz4xg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.745262 4675 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" podUID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.745786 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" event={"ID":"cb0a749f-6d30-4f56-aba4-10a5339044f7","Type":"ContainerStarted","Data":"f41e1379a8a60d0e95eb45e64dc25a6c1a55f93507544669e3b7710d2380a5ac"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.748553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ms8dm" event={"ID":"6190d999-660a-44f5-a51b-cd53647289db","Type":"ContainerStarted","Data":"e862f6bcc8b98eacefebd9a4e7a1b6885964faaf4fb8409149cf7826f4984fb5"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.748576 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ms8dm" event={"ID":"6190d999-660a-44f5-a51b-cd53647289db","Type":"ContainerStarted","Data":"9238285c6e256cf59dc68b9ae137e73493ad16c7c74c2378cdd472db231d526d"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.749080 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.751980 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.752542 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.752216 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" event={"ID":"c3091040-d378-4c3b-9f64-bf750e9b27f1","Type":"ContainerStarted","Data":"cc2b5fa3efb276c7eceaadb14d6597e9c344594cd3bf5759ce65f0725e92488d"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.752835 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" event={"ID":"c3091040-d378-4c3b-9f64-bf750e9b27f1","Type":"ContainerStarted","Data":"475ed8d76fad3c8376df22fee1ebf14e1312453dcfaba5e87543c0f029848938"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.755435 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" event={"ID":"8742d972-144c-43ee-8586-0901733ecb49","Type":"ContainerStarted","Data":"16689be1c946620d452d45272fb959a78082b623baf0f88097fa28bc9fb5bfee"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.757832 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cz299" event={"ID":"0d4777cf-9799-450d-a46f-5d5bedeaa706","Type":"ContainerStarted","Data":"341df6969631cd76bd1c5e1eb6f8a4c8a0ea567c4a1aa174b0b64b5a8d0889e6"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.760756 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvdl\" (UniqueName: \"kubernetes.io/projected/ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1-kube-api-access-vcvdl\") pod \"openshift-apiserver-operator-796bbdcf4f-mdn79\" (UID: \"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.763765 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" event={"ID":"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295","Type":"ContainerStarted","Data":"02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.763801 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" event={"ID":"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295","Type":"ContainerStarted","Data":"eae68c881ac28908c738a3bd0b5ff106e1622c9b4df98958cd72fb39e2ebc11a"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.764487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.765900 4675 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kwl5c container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.765940 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" podUID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.766163 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" event={"ID":"a8fb0eb7-f6b1-4097-b167-c443441a28a6","Type":"ContainerStarted","Data":"603e019697d12f8ab1010be77ae0ec1558d86df7b4fb50dfe1ee6624ec65b685"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.766185 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" event={"ID":"a8fb0eb7-f6b1-4097-b167-c443441a28a6","Type":"ContainerStarted","Data":"af5a4540a93ad5084a2a483da7c3658a2324c5c36b78258cb92b8e1ed70f6d49"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.766609 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.769229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" event={"ID":"fa4ab0d2-e28e-4281-ba62-51165d7894e0","Type":"ContainerStarted","Data":"79888d101de96b7adb765146ac482a14c2fa810c67f3a0a931ac1cf048b8104e"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.769292 4675 patch_prober.go:28] interesting pod/console-operator-58897d9998-5kdbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.769338 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" podUID="a8fb0eb7-f6b1-4097-b167-c443441a28a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.772007 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" event={"ID":"d16b4be5-ea4f-4d90-b2be-3e9582858283","Type":"ContainerStarted","Data":"0633c6dddd906c57e00713a2537869d0297e6bf380a4c9fbcb50b6bf374b2a6f"} Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.784504 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppdft\" (UniqueName: \"kubernetes.io/projected/d3bc6824-00ea-42b0-99f7-ca0145d2e630-kube-api-access-ppdft\") pod \"dns-operator-744455d44c-smdhk\" (UID: \"d3bc6824-00ea-42b0-99f7-ca0145d2e630\") " pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.798121 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.798244 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.298220577 +0000 UTC m=+181.024635304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.798604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.798953 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.298938615 +0000 UTC m=+181.025353342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.803333 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxzs\" (UniqueName: \"kubernetes.io/projected/56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f-kube-api-access-6sxzs\") pod \"service-ca-9c57cc56f-8mnf4\" (UID: \"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.813414 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tcjb8"]
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.815155 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.841250 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwh2f\" (UniqueName: \"kubernetes.io/projected/90a5318c-96de-40ae-a8f4-87241ab72f28-kube-api-access-xwh2f\") pod \"control-plane-machine-set-operator-78cbb6b69f-7wq8v\" (UID: \"90a5318c-96de-40ae-a8f4-87241ab72f28\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.847204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxc6j\" (UniqueName: \"kubernetes.io/projected/e053a129-b32f-4092-831a-db4052fad241-kube-api-access-mxc6j\") pod \"csi-hostpathplugin-9tglw\" (UID: \"e053a129-b32f-4092-831a-db4052fad241\") " pod="hostpath-provisioner/csi-hostpathplugin-9tglw"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.850436 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.853347 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.863752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbxhf\" (UniqueName: \"kubernetes.io/projected/0a531689-8a1f-425d-a966-d4cbb209b38a-kube-api-access-cbxhf\") pod \"catalog-operator-68c6474976-klhkk\" (UID: \"0a531689-8a1f-425d-a966-d4cbb209b38a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.884610 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5dkw\" (UniqueName: \"kubernetes.io/projected/99ff329a-9fc3-4e73-9c02-abad7af09113-kube-api-access-n5dkw\") pod \"machine-config-controller-84d6567774-v694d\" (UID: \"99ff329a-9fc3-4e73-9c02-abad7af09113\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.899651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:03 crc kubenswrapper[4675]: E1121 13:35:03.900510 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.400477938 +0000 UTC m=+181.126892665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.901093 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v656\" (UniqueName: \"kubernetes.io/projected/e4afd2ae-2106-47f9-b516-a2987c0d359d-kube-api-access-6v656\") pod \"machine-config-server-gh9gt\" (UID: \"e4afd2ae-2106-47f9-b516-a2987c0d359d\") " pod="openshift-machine-config-operator/machine-config-server-gh9gt"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.916798 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.918458 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.921661 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86vz\" (UniqueName: \"kubernetes.io/projected/73a0c3be-9995-4143-8a8a-13d00ff3e702-kube-api-access-d86vz\") pod \"cluster-samples-operator-665b6dd947-rhnsm\" (UID: \"73a0c3be-9995-4143-8a8a-13d00ff3e702\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.934378 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-smdhk"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.943180 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.945346 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbmvq\" (UniqueName: \"kubernetes.io/projected/4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1-kube-api-access-zbmvq\") pod \"olm-operator-6b444d44fb-wv7l2\" (UID: \"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.955386 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv"]
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.960574 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b565103-d77f-4ab0-b1cb-82d0912b4984-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8wcg\" (UID: \"8b565103-d77f-4ab0-b1cb-82d0912b4984\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg"
Nov 21 13:35:03 crc kubenswrapper[4675]: I1121 13:35:03.982442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blstb\" (UniqueName: \"kubernetes.io/projected/0a962f3e-9813-4fe0-81cc-86faebfc6446-kube-api-access-blstb\") pod \"machine-config-operator-74547568cd-8gbn9\" (UID: \"0a962f3e-9813-4fe0-81cc-86faebfc6446\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:03.996213 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.003964 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.004448 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.504433551 +0000 UTC m=+181.230848278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.016823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlzpk\" (UniqueName: \"kubernetes.io/projected/07333151-cefa-4d07-aeb0-e88764760bfc-kube-api-access-tlzpk\") pod \"package-server-manager-789f6589d5-hmpjz\" (UID: \"07333151-cefa-4d07-aeb0-e88764760bfc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.032700 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.043231 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js2k5\" (UniqueName: \"kubernetes.io/projected/a8863d93-a7c1-450c-8712-45c98a7facc6-kube-api-access-js2k5\") pod \"kube-storage-version-migrator-operator-b67b599dd-vxgr2\" (UID: \"a8863d93-a7c1-450c-8712-45c98a7facc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.046585 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.060792 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.067623 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mnf4"]
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.068131 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlc85\" (UniqueName: \"kubernetes.io/projected/561c5bff-b0cf-4d0a-8035-7898ce300a38-kube-api-access-jlc85\") pod \"ingress-canary-g5t4j\" (UID: \"561c5bff-b0cf-4d0a-8035-7898ce300a38\") " pod="openshift-ingress-canary/ingress-canary-g5t4j"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.078635 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.081633 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hw4\" (UniqueName: \"kubernetes.io/projected/88a7ce6c-1fed-43f5-82c6-44b4fac52dad-kube-api-access-98hw4\") pod \"migrator-59844c95c7-tz7pm\" (UID: \"88a7ce6c-1fed-43f5-82c6-44b4fac52dad\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.083795 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.084005 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvz2c\" (UniqueName: \"kubernetes.io/projected/07897227-3975-4a8e-88c7-af39b89133af-kube-api-access-kvz2c\") pod \"multus-admission-controller-857f4d67dd-z8jjw\" (UID: \"07897227-3975-4a8e-88c7-af39b89133af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.090919 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.096799 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.100959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5lpb\" (UniqueName: \"kubernetes.io/projected/0f8f39e1-45f2-4870-b7c6-03edd55ebdc1-kube-api-access-v5lpb\") pod \"service-ca-operator-777779d784-z8pfh\" (UID: \"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.105197 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.105380 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.605363338 +0000 UTC m=+181.331778065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.105500 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.106574 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.606562708 +0000 UTC m=+181.332977435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.109622 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.122319 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.137141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntkd\" (UniqueName: \"kubernetes.io/projected/531294e8-2e33-4f1f-848a-b2d19d8e6102-kube-api-access-vntkd\") pod \"collect-profiles-29395530-gwk6l\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.140265 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9tglw"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.148857 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5t4j"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.160414 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gh9gt"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.162483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mfv\" (UniqueName: \"kubernetes.io/projected/39e32a09-8172-443c-bd56-00a536a06de2-kube-api-access-v9mfv\") pod \"marketplace-operator-79b997595-r5svl\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5svl"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.184425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c789\" (UniqueName: \"kubernetes.io/projected/feee8664-38cf-4761-b00f-7b9551d7916a-kube-api-access-9c789\") pod \"packageserver-d55dfcdfc-7wrmm\" (UID: \"feee8664-38cf-4761-b00f-7b9551d7916a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.185844 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7v27\" (UniqueName: \"kubernetes.io/projected/50aa3e37-a2f0-418a-ae62-70b46bd19ed2-kube-api-access-r7v27\") pod \"dns-default-tkjfz\" (UID: \"50aa3e37-a2f0-418a-ae62-70b46bd19ed2\") " pod="openshift-dns/dns-default-tkjfz"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.206963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.207434 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.707378032 +0000 UTC m=+181.433792779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.207485 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.208203 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.708183942 +0000 UTC m=+181.434598669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: W1121 13:35:04.217149 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56a6efb8_f8cc_4cc9_b9ea_9c348c5fbe2f.slice/crio-6171accef5d9da3ac21747d6bdeaabba4cf2980ecc9caa16f94ba1ab2c77387c WatchSource:0}: Error finding container 6171accef5d9da3ac21747d6bdeaabba4cf2980ecc9caa16f94ba1ab2c77387c: Status 404 returned error can't find the container with id 6171accef5d9da3ac21747d6bdeaabba4cf2980ecc9caa16f94ba1ab2c77387c
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.309806 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.310465 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.810438103 +0000 UTC m=+181.536852840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.310527 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.328185 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.348735 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.367641 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.409410 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.412061 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.412483 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:04.912468087 +0000 UTC m=+181.638882814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.443729 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72"]
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.455956 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tkjfz"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.459440 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79"]
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.515539 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.515943 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.015924798 +0000 UTC m=+181.742339535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.600581 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v"]
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.617593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.617918 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.117903191 +0000 UTC m=+181.844317928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.630317 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rg5ls"]
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.718446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.719061 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.219026603 +0000 UTC m=+181.945441340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.740034 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ms8dm" podStartSLOduration=139.740014439 podStartE2EDuration="2m19.740014439s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:04.738629594 +0000 UTC m=+181.465044321" watchObservedRunningTime="2025-11-21 13:35:04.740014439 +0000 UTC m=+181.466429166"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.823810 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.831762 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.331746836 +0000 UTC m=+182.058161563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.892894 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" event={"ID":"3e19a2b0-8360-4276-a5e6-d543e9941234","Type":"ContainerStarted","Data":"e9e3f2678f024a350025c50185dd8d9e150cfb8fbbd3f16d8fd4c90a6b8c6ae5"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.893114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" event={"ID":"3e19a2b0-8360-4276-a5e6-d543e9941234","Type":"ContainerStarted","Data":"c04a0e42b77a5b225c4e010810621c5ea6d579d42b09b0f29cbff1704a266ace"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.903197 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" event={"ID":"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d","Type":"ContainerStarted","Data":"888cd01100afa1ef64df38e412c5705163e6a2e5adbea39a0b4bf230003a6f1c"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.928195 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:04 crc kubenswrapper[4675]: E1121 13:35:04.928578 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.4285601 +0000 UTC m=+182.154974827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.935117 4675 generic.go:334] "Generic (PLEG): container finished" podID="c3091040-d378-4c3b-9f64-bf750e9b27f1" containerID="cc2b5fa3efb276c7eceaadb14d6597e9c344594cd3bf5759ce65f0725e92488d" exitCode=0
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.935185 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" event={"ID":"c3091040-d378-4c3b-9f64-bf750e9b27f1","Type":"ContainerDied","Data":"cc2b5fa3efb276c7eceaadb14d6597e9c344594cd3bf5759ce65f0725e92488d"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.947135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" event={"ID":"8742d972-144c-43ee-8586-0901733ecb49","Type":"ContainerStarted","Data":"a04cc7b8c0daef9f26569d0514c238ef2c70f01b2e1e6cbcca7c0bd64aee7195"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.960347 4675 generic.go:334] "Generic (PLEG): container finished" podID="cb0a749f-6d30-4f56-aba4-10a5339044f7" containerID="6a78daf7c7706e57b97b5fa7d048f952f9f2cac3bc4efa9b6c8d8608244bb114" exitCode=0
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.960428 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" event={"ID":"cb0a749f-6d30-4f56-aba4-10a5339044f7","Type":"ContainerDied","Data":"6a78daf7c7706e57b97b5fa7d048f952f9f2cac3bc4efa9b6c8d8608244bb114"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.963952 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" event={"ID":"d16b4be5-ea4f-4d90-b2be-3e9582858283","Type":"ContainerStarted","Data":"65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.964333 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.965388 4675 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-k5872 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.965422 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" podUID="d16b4be5-ea4f-4d90-b2be-3e9582858283" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.966794 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" event={"ID":"90a5318c-96de-40ae-a8f4-87241ab72f28","Type":"ContainerStarted","Data":"0a95ad937431ff87588b2c5f0cc558c6bd159dfaaa09f51b71117fc81a81c62e"}
Nov 21 13:35:04 crc kubenswrapper[4675]: I1121 13:35:04.996695 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" event={"ID":"eb5a0d6b-3347-4d29-90a5-f554c65e5ddb","Type":"ContainerStarted","Data":"435d0df7405a958907835c24b7ac6104600d80470a2b345dfc7d4e27ffad6e0b"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.007784 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gh9gt" event={"ID":"e4afd2ae-2106-47f9-b516-a2987c0d359d","Type":"ContainerStarted","Data":"c9b89e2643218d348da9752a0638144bd6d27dedf1f7896361d4e94374945ed8"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.011662 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" event={"ID":"9d459863-1177-489f-94af-1cc9c352e509","Type":"ContainerStarted","Data":"c41260a390581bcde2e2d09f7e84df8f052a8cd0c9f049da1927d6d73a35a437"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.011714 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" event={"ID":"9d459863-1177-489f-94af-1cc9c352e509","Type":"ContainerStarted","Data":"283e1343658a86ecc84e243573820a36e2b238139cf8c6f27a2f04d16d98d6bb"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.015088 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa4ab0d2-e28e-4281-ba62-51165d7894e0" containerID="af6407902856d7f08e240e1eb3abeb3de21c250ceab6ae969b3c6146cac34277" exitCode=0
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.015157 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" event={"ID":"fa4ab0d2-e28e-4281-ba62-51165d7894e0","Type":"ContainerDied","Data":"af6407902856d7f08e240e1eb3abeb3de21c250ceab6ae969b3c6146cac34277"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.019639 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cz299" event={"ID":"0d4777cf-9799-450d-a46f-5d5bedeaa706","Type":"ContainerStarted","Data":"8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.029585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.029918 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.529901187 +0000 UTC m=+182.256315914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.045804 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.052661 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.055168 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" event={"ID":"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f","Type":"ContainerStarted","Data":"f59b613748803db520fe02e6d4b34815ef94ee62723b4f6ee3be03c77a5709bd"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.055214 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" event={"ID":"56a6efb8-f8cc-4cc9-b9ea-9c348c5fbe2f","Type":"ContainerStarted","Data":"6171accef5d9da3ac21747d6bdeaabba4cf2980ecc9caa16f94ba1ab2c77387c"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.057012 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.063135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" event={"ID":"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1","Type":"ContainerStarted","Data":"702221cce5707c48385ee5298f42845d31682ee3e371733df1c7caab476864a4"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.064718 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xs2g9" event={"ID":"da9a4b5e-bce2-48d5-9aec-e681063b19de","Type":"ContainerStarted","Data":"e622f86241cf1242bfd930c9f42ca71bca71891067b356afab697f1189ec0320"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.064830 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xs2g9" event={"ID":"da9a4b5e-bce2-48d5-9aec-e681063b19de","Type":"ContainerStarted","Data":"81e74dd0223a13e53a54ea047ac66a9ef6250d2e8705945a527bf9b11f56a334"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.074685 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" event={"ID":"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6","Type":"ContainerStarted","Data":"2e43217edb33384d66e156c127b6bfaf8792d073f60e712a56a23ae36edc1887"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.083725 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" event={"ID":"d8ba03e2-eef6-44ce-908e-2606777e9fe4","Type":"ContainerStarted","Data":"62b8371951f6c78d1fbb1b8700255c8540ea2fb96d254a62b6addfaa473de784"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.083790 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" event={"ID":"d8ba03e2-eef6-44ce-908e-2606777e9fe4","Type":"ContainerStarted","Data":"1ff589ff2c82fd72e0e40d31157ee3afdffb4240d47bbf8b9a9d2be094b67944"}
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.085604 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.085641 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.101572 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-smdhk"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.108732 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.116211 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.139020 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.142657 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.642540708 +0000 UTC m=+182.368955435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.146619 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.153015 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.65299821 +0000 UTC m=+182.379412937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.187593 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5kdbn" podStartSLOduration=140.187572955 podStartE2EDuration="2m20.187572955s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:05.17937728 +0000 UTC m=+181.905792027" watchObservedRunningTime="2025-11-21 13:35:05.187572955 +0000 UTC m=+181.913987692"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.251791 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.253642 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.753623679 +0000 UTC m=+182.480038406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.281943 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.354888 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.355502 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.855302545 +0000 UTC m=+182.581717322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.455000 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" podStartSLOduration=140.454981101 podStartE2EDuration="2m20.454981101s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:05.453921234 +0000 UTC m=+182.180335971" watchObservedRunningTime="2025-11-21 13:35:05.454981101 +0000 UTC m=+182.181395828"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.456209 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.456399 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.956383746 +0000 UTC m=+182.682798473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.456656 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.456969 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:05.95695339 +0000 UTC m=+182.683368147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.558612 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.559270 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.059254512 +0000 UTC m=+182.785669239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.576366 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5kdbn"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.588777 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" podStartSLOduration=140.588757001 podStartE2EDuration="2m20.588757001s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:05.586441923 +0000 UTC m=+182.312856650" watchObservedRunningTime="2025-11-21 13:35:05.588757001 +0000 UTC m=+182.315171728"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.590866 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.641251 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.644179 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-v694d"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.647615 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.651712 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9tglw"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.663742 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.664181 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.164165829 +0000 UTC m=+182.890580556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.706018 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xs2g9"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.724800 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:35:05 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Nov 21 13:35:05 crc kubenswrapper[4675]: [+]process-running ok
Nov 21 13:35:05 crc kubenswrapper[4675]: healthz check failed
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.724839 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:35:05 crc kubenswrapper[4675]: W1121 13:35:05.725546 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8863d93_a7c1_450c_8712_45c98a7facc6.slice/crio-9d36ef0e3b81699fa2858c12d63f7a0d47e3dd8059042d25dc6be0e7df8ec9f4 WatchSource:0}: Error finding container 9d36ef0e3b81699fa2858c12d63f7a0d47e3dd8059042d25dc6be0e7df8ec9f4: Status 404 returned error can't find the container with id 9d36ef0e3b81699fa2858c12d63f7a0d47e3dd8059042d25dc6be0e7df8ec9f4
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.767575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.768314 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.268291596 +0000 UTC m=+182.994706323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.770187 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.771363 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.271332232 +0000 UTC m=+182.997746959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.864418 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" podStartSLOduration=140.864392612 podStartE2EDuration="2m20.864392612s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:05.807698183 +0000 UTC m=+182.534112910" watchObservedRunningTime="2025-11-21 13:35:05.864392612 +0000 UTC m=+182.590807349"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.883696 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cz299" podStartSLOduration=140.883679465 podStartE2EDuration="2m20.883679465s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:05.863819438 +0000 UTC m=+182.590234155" watchObservedRunningTime="2025-11-21 13:35:05.883679465 +0000 UTC m=+182.610094192"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.886167 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2"]
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.895213 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.395187913 +0000 UTC m=+183.121602640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.895155 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.895823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:05 crc kubenswrapper[4675]: E1121 13:35:05.896127 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.396117447 +0000 UTC m=+183.122532174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.924592 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.975675 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tkjfz"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.977629 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5svl"]
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.977832 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tcjb8" podStartSLOduration=140.977822752 podStartE2EDuration="2m20.977822752s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:05.975567036 +0000 UTC m=+182.701981763" watchObservedRunningTime="2025-11-21 13:35:05.977822752 +0000 UTC m=+182.704237479"
Nov 21 13:35:05 crc kubenswrapper[4675]: I1121 13:35:05.990922 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm"]
Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.000544 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.000924 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.50091006 +0000 UTC m=+183.227324797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.052412 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8mnf4" podStartSLOduration=141.05239618 podStartE2EDuration="2m21.05239618s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.051459546 +0000 UTC m=+182.777874273" watchObservedRunningTime="2025-11-21 13:35:06.05239618 +0000 UTC m=+182.778810897"
Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.073286 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm"]
Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.100588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z8jjw"]
Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.100626 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"]
Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.104862 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.105189 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.605177331 +0000 UTC m=+183.331592058 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.112571 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm"] Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.113584 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwz5c" podStartSLOduration=141.113567271 podStartE2EDuration="2m21.113567271s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.088439092 +0000 UTC m=+182.814853829" watchObservedRunningTime="2025-11-21 13:35:06.113567271 +0000 UTC m=+182.839981998" Nov 21 13:35:06 crc kubenswrapper[4675]: W1121 13:35:06.115171 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07897227_3975_4a8e_88c7_af39b89133af.slice/crio-390912fc4cba5f221821c26835c73b20234b00a162b9910c2b35fdeb57d4e93f WatchSource:0}: Error finding container 390912fc4cba5f221821c26835c73b20234b00a162b9910c2b35fdeb57d4e93f: Status 404 returned error can't find the container with id 390912fc4cba5f221821c26835c73b20234b00a162b9910c2b35fdeb57d4e93f Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.140308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" event={"ID":"a8863d93-a7c1-450c-8712-45c98a7facc6","Type":"ContainerStarted","Data":"9d36ef0e3b81699fa2858c12d63f7a0d47e3dd8059042d25dc6be0e7df8ec9f4"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.144992 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5t4j"] Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.168797 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" event={"ID":"07333151-cefa-4d07-aeb0-e88764760bfc","Type":"ContainerStarted","Data":"6903504bd5aca527e3d32442d98ebe3d21475afeb50b1a911433050a0e5b40ce"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.168944 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" event={"ID":"07333151-cefa-4d07-aeb0-e88764760bfc","Type":"ContainerStarted","Data":"a3c268ebe0ff8ee55b82e8db8cf43a8b055760422cd33b6d5b552673de7f4dbc"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.183301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" event={"ID":"2bb3c26c-805d-4b41-b66b-b2b7b87583de","Type":"ContainerStarted","Data":"e44239f86f22b18b23714d289c700c326beb91b58d8cd40e33f28ca3a04c37c1"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.183437 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" event={"ID":"2bb3c26c-805d-4b41-b66b-b2b7b87583de","Type":"ContainerStarted","Data":"b775dd990793a882908ce1d883201e6606b8ff6c808397d3962947b9dfc6f9b4"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.193679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" event={"ID":"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472","Type":"ContainerStarted","Data":"1ff9d1e61a204d5c0c1c60158978563fddb958c983be51b189d85a37aa8b6d9b"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.193728 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" event={"ID":"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472","Type":"ContainerStarted","Data":"0b4fdcaab790bc28f4bcbdc4996d4279d0023824ef001695b260853520a91305"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.195319 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" event={"ID":"cb0a749f-6d30-4f56-aba4-10a5339044f7","Type":"ContainerStarted","Data":"b3b0cc1eb506265d334e47cd64bf47d2cfac9968c3ac8cc8d315b34ab6b427b5"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.198631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" event={"ID":"4b37fc2b-a6e2-45e3-bc11-c0629ff5ebc6","Type":"ContainerStarted","Data":"ab6df758afb104631f541c88ade83830700d7e6187e608a8d685374b1868f14a"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.202762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tkjfz" event={"ID":"50aa3e37-a2f0-418a-ae62-70b46bd19ed2","Type":"ContainerStarted","Data":"e11d20b90a7fad242b95cf79fd28e650723f500bfd7222ce522d9bfe75fe62a1"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.204233 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" event={"ID":"ad4e0c5e-d605-4b6b-9c57-c5ee19bd1ef1","Type":"ContainerStarted","Data":"0eeff624df2075107a83da33413c85beb636b81dc22ff268751da4e0a2f4ced8"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.209153 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.209621 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.709605646 +0000 UTC m=+183.436020373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.255756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" event={"ID":"d8ba03e2-eef6-44ce-908e-2606777e9fe4","Type":"ContainerStarted","Data":"a8a62de36f300ff0feeca80ee41fc03d7ff2816198b969e45fa9c1927315bbdb"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.295637 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" event={"ID":"fa4ab0d2-e28e-4281-ba62-51165d7894e0","Type":"ContainerStarted","Data":"1f8679e84aaabd267ddfa2b11da5108f053ded388767bf2714995935c7266788"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.331607 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.332614 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" event={"ID":"0a531689-8a1f-425d-a966-d4cbb209b38a","Type":"ContainerStarted","Data":"a599f9d7fb2614707688b45bb72c1914ec5984b731b95edd8ab7b9f4e74e7f3f"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.332654 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.334726 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.834713119 +0000 UTC m=+183.561127846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.344341 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-55wvx" podStartSLOduration=141.344321489 podStartE2EDuration="2m21.344321489s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.332874933 +0000 UTC m=+183.059289680" watchObservedRunningTime="2025-11-21 13:35:06.344321489 +0000 UTC m=+183.070736216" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.347292 4675 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-klhkk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.358424 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" podUID="0a531689-8a1f-425d-a966-d4cbb209b38a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.362295 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" event={"ID":"c3091040-d378-4c3b-9f64-bf750e9b27f1","Type":"ContainerStarted","Data":"4e80a938db75ee829a848f557ec4194b0640630ca00a681369257c03d1b841c8"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.362824 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.364542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gh9gt" event={"ID":"e4afd2ae-2106-47f9-b516-a2987c0d359d","Type":"ContainerStarted","Data":"5ef66536ae6649fe404d3edc3e8f5d1ffcc6e74db3f4d03ed416187b63fe477c"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.367129 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" event={"ID":"2f2b9592-b8bc-41ca-82e1-91a9df9e5e3d","Type":"ContainerStarted","Data":"94c6ae63a368696c0d6c17e24004c0cc1c73c720c18964384d37ff3a61aadf11"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.370119 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" event={"ID":"e053a129-b32f-4092-831a-db4052fad241","Type":"ContainerStarted","Data":"5e6816e9edd01f7a4e7cafbdaa040089fae19c5131882f954c77228275980c29"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.371145 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" event={"ID":"0a962f3e-9813-4fe0-81cc-86faebfc6446","Type":"ContainerStarted","Data":"694b3ef6678e6bc4661577b971a169aed2180c68a8dc20f7d40abae28d6bac4d"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.371179 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" event={"ID":"0a962f3e-9813-4fe0-81cc-86faebfc6446","Type":"ContainerStarted","Data":"6394659e8d7900f88261909b7e742c848fde8f26ffb00b7b56415ad17eea50ea"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.372379 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" event={"ID":"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1","Type":"ContainerStarted","Data":"edcf91fb8f48761921fa4792e3b33547fcaec5d3fe0f4ac12eabc8b3ff7b45ae"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.373328 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" event={"ID":"d3bc6824-00ea-42b0-99f7-ca0145d2e630","Type":"ContainerStarted","Data":"27582844df9f2d076206dadacc70c3132934df6ba73d213e25d875d21337dc2b"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.373367 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" event={"ID":"d3bc6824-00ea-42b0-99f7-ca0145d2e630","Type":"ContainerStarted","Data":"0cf4acd55af380197cac602bf3d10f691966846247b56543ee1a71b447f6183f"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.374237 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" event={"ID":"90a5318c-96de-40ae-a8f4-87241ab72f28","Type":"ContainerStarted","Data":"0b91bfc5e864ad51669d63b9a5416b02605dbcecff48636e6df31a077863333c"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.375643 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" event={"ID":"8b565103-d77f-4ab0-b1cb-82d0912b4984","Type":"ContainerStarted","Data":"13c06e495a1c452e108314e4f437957d0e2917996ec6c4a194713865c6907c7a"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.375668 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" event={"ID":"8b565103-d77f-4ab0-b1cb-82d0912b4984","Type":"ContainerStarted","Data":"ea34d22e52e8f255336c1537fd519d04beb06bade382681b6c3f26429cc32151"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.409280 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" event={"ID":"39e32a09-8172-443c-bd56-00a536a06de2","Type":"ContainerStarted","Data":"1853a90637511afc095925eb357d0e47f8ff6acac91e1090a661e65eaff455d1"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.416079 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" event={"ID":"99ff329a-9fc3-4e73-9c02-abad7af09113","Type":"ContainerStarted","Data":"c1189309acaec68b2fc106dc4055b53b7a8f86090b178b416314743cff50f9c8"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.425254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" 
event={"ID":"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1","Type":"ContainerStarted","Data":"bc6208abfb45b4df7d6c31326a4c7ea828275f2313143eb861594f670ada04cd"} Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.428981 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.429027 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.433411 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.438866 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:06.938839786 +0000 UTC m=+183.665254533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.466556 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.468964 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k6r5g" podStartSLOduration=141.46894514 podStartE2EDuration="2m21.46894514s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.46494563 +0000 UTC m=+183.191360357" watchObservedRunningTime="2025-11-21 13:35:06.46894514 +0000 UTC m=+183.195359867" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.538140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.560763 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.060748448 +0000 UTC m=+183.787163175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.600986 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s44fv" podStartSLOduration=141.600966484 podStartE2EDuration="2m21.600966484s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.55483842 +0000 UTC m=+183.281253147" watchObservedRunningTime="2025-11-21 13:35:06.600966484 +0000 UTC m=+183.327381211" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.637394 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xs2g9" podStartSLOduration=141.637373966 podStartE2EDuration="2m21.637373966s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.604809801 +0000 UTC m=+183.331224528" watchObservedRunningTime="2025-11-21 13:35:06.637373966 +0000 UTC m=+183.363788703" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.639954 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.640485 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.140465963 +0000 UTC m=+183.866880690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.679849 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdn79" podStartSLOduration=141.679829369 podStartE2EDuration="2m21.679829369s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.639353895 +0000 UTC m=+183.365768622" watchObservedRunningTime="2025-11-21 13:35:06.679829369 +0000 UTC m=+183.406244096" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.714304 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:06 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:06 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:06 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.714371 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.715428 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rg5ls" podStartSLOduration=141.71540792 podStartE2EDuration="2m21.71540792s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.713589134 +0000 UTC m=+183.440003861" watchObservedRunningTime="2025-11-21 13:35:06.71540792 +0000 UTC m=+183.441822647" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.716332 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8wcg" podStartSLOduration=141.716325143 podStartE2EDuration="2m21.716325143s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.681795428 +0000 UTC m=+183.408210145" watchObservedRunningTime="2025-11-21 13:35:06.716325143 +0000 UTC m=+183.442739870" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.746677 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 
21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.747266 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.247249697 +0000 UTC m=+183.973664424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.784333 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gh9gt" podStartSLOduration=5.784308405 podStartE2EDuration="5.784308405s" podCreationTimestamp="2025-11-21 13:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.76095245 +0000 UTC m=+183.487367177" watchObservedRunningTime="2025-11-21 13:35:06.784308405 +0000 UTC m=+183.510723132" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.831683 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" podStartSLOduration=141.831663731 podStartE2EDuration="2m21.831663731s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.830886191 +0000 UTC m=+183.557300918" watchObservedRunningTime="2025-11-21 13:35:06.831663731 +0000 UTC m=+183.558078458" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.831868 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7wq8v" podStartSLOduration=141.831864396 podStartE2EDuration="2m21.831864396s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.807236029 +0000 UTC m=+183.533650756" watchObservedRunningTime="2025-11-21 13:35:06.831864396 +0000 UTC m=+183.558279123" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.847786 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.848120 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.348054951 +0000 UTC m=+184.074469678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.947894 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" podStartSLOduration=141.947872801 podStartE2EDuration="2m21.947872801s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.944442685 +0000 UTC m=+183.670857412" watchObservedRunningTime="2025-11-21 13:35:06.947872801 +0000 UTC m=+183.674287528" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.948007 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkn72" podStartSLOduration=141.948002844 podStartE2EDuration="2m21.948002844s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.887234402 +0000 UTC m=+183.613649129" watchObservedRunningTime="2025-11-21 13:35:06.948002844 +0000 UTC m=+183.674417571" Nov 21 13:35:06 crc kubenswrapper[4675]: I1121 13:35:06.949012 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:06 crc kubenswrapper[4675]: E1121 13:35:06.949354 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.449339407 +0000 UTC m=+184.175754134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.017465 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7p5h" podStartSLOduration=142.017449643 podStartE2EDuration="2m22.017449643s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:06.982050546 +0000 UTC m=+183.708465273" watchObservedRunningTime="2025-11-21 13:35:07.017449643 +0000 UTC m=+183.743864370" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.055349 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.055754 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.555734751 +0000 UTC m=+184.282149478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.156814 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.157337 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.657326235 +0000 UTC m=+184.383740952 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.261481 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.261762 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.261950 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.761927034 +0000 UTC m=+184.488341761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.263805 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.291887 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3034a641-e8c3-4303-bb0e-1da29de3a41b-metrics-certs\") pod \"network-metrics-daemon-djn7k\" (UID: \"3034a641-e8c3-4303-bb0e-1da29de3a41b\") " pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.363311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.363633 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.86362199 +0000 UTC m=+184.590036717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.465757 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.466155 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:07.966140307 +0000 UTC m=+184.692555034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.479221 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" event={"ID":"d3bc6824-00ea-42b0-99f7-ca0145d2e630","Type":"ContainerStarted","Data":"540bcba86d3ff69b46ba4a73d4c1431034ccc0e4a859dc92f1921d4d9c79b7f0"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.499109 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" event={"ID":"07333151-cefa-4d07-aeb0-e88764760bfc","Type":"ContainerStarted","Data":"56ddb6fdf00b324a5c960b744177c63acf6522c24c33da7adc3852bad71d4eda"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.500019 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.522666 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-smdhk" podStartSLOduration=142.522648202 podStartE2EDuration="2m22.522648202s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.518602021 +0000 UTC m=+184.245016748" watchObservedRunningTime="2025-11-21 13:35:07.522648202 +0000 UTC m=+184.249062929" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.522822 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" podStartSLOduration=142.522818297 podStartE2EDuration="2m22.522818297s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.046598422 +0000 UTC m=+183.773013149" watchObservedRunningTime="2025-11-21 13:35:07.522818297 +0000 UTC m=+184.249233024" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.539732 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" event={"ID":"4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1","Type":"ContainerStarted","Data":"07f09cb620af828468f4f954bfda946a5d5cf3dec2217af15d54507ce6249a5d"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.540578 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.541959 4675 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wv7l2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.542032 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" podUID="4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.555827 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" event={"ID":"cb0a749f-6d30-4f56-aba4-10a5339044f7","Type":"ContainerStarted","Data":"bf055471fb8192c8bc1ed067a260a256586710b3e1b37bf0f9c8c21e87b90e76"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.564005 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" event={"ID":"0f8f39e1-45f2-4870-b7c6-03edd55ebdc1","Type":"ContainerStarted","Data":"fce8e991e51cf3a9581c07a251802ef8b6bdfe138fafe30936d644f90640df5f"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.565916 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tkjfz" event={"ID":"50aa3e37-a2f0-418a-ae62-70b46bd19ed2","Type":"ContainerStarted","Data":"bb749780a6901bf94aeaffd5ebf16a7e5a47de9aabde94115d94592ffb5e815d"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.566637 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.574570 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.575211 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-djn7k" Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.575856 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.075842724 +0000 UTC m=+184.802257451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.585479 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" podStartSLOduration=142.585462175 podStartE2EDuration="2m22.585462175s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.574836329 +0000 UTC m=+184.301251056" watchObservedRunningTime="2025-11-21 13:35:07.585462175 +0000 UTC m=+184.311876902" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.629433 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" event={"ID":"a8863d93-a7c1-450c-8712-45c98a7facc6","Type":"ContainerStarted","Data":"29487b30d4cf3eb6e7ea8912200ea037215364913210d0d14c6ecab75b8f3959"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.669807 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" podStartSLOduration=142.669784146 podStartE2EDuration="2m22.669784146s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.628956614 +0000 UTC m=+184.355371341" watchObservedRunningTime="2025-11-21 13:35:07.669784146 +0000 UTC m=+184.396198873" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.677311 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.681374 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.181349336 +0000 UTC m=+184.907764063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.712440 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" event={"ID":"88a7ce6c-1fed-43f5-82c6-44b4fac52dad","Type":"ContainerStarted","Data":"9cc8a8e468a5f68e117a4e2506f9c68ec270b49a8a63cea6e158000e869d06d7"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.712504 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" event={"ID":"88a7ce6c-1fed-43f5-82c6-44b4fac52dad","Type":"ContainerStarted","Data":"c465f7039709a92f02272245bc8e97e2e473ecace5329e8674428b0210b51692"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.716701 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" podStartSLOduration=142.71666938 podStartE2EDuration="2m22.71666938s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.693271975 +0000 UTC m=+184.419686712" watchObservedRunningTime="2025-11-21 13:35:07.71666938 +0000 UTC m=+184.443084107" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.721319 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:07 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:07 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:07 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.721376 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.731445 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" event={"ID":"73a0c3be-9995-4143-8a8a-13d00ff3e702","Type":"ContainerStarted","Data":"22679efb8dc787bd8fbba0bd5f9a243de31bf5d557c05507d2cb1240d70dcc38"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.731492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" event={"ID":"73a0c3be-9995-4143-8a8a-13d00ff3e702","Type":"ContainerStarted","Data":"9a882be8b04fffff4f04abec34746593ac3e8a673eaba9ecc5c6bcdd6ce47ca6"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.739013 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z8pfh" podStartSLOduration=142.739000259 podStartE2EDuration="2m22.739000259s" 
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.753599 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" event={"ID":"0a531689-8a1f-425d-a966-d4cbb209b38a","Type":"ContainerStarted","Data":"40dd42d989fe4d0fbffebc8dfc44974142ff8c3221e2cb51d34e42ee74fcd83b"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.754789 4675 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-klhkk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.754824 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk" podUID="0a531689-8a1f-425d-a966-d4cbb209b38a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.774589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" event={"ID":"d0eb6f9a-ba54-4f26-aaa6-1cf5044ae472","Type":"ContainerStarted","Data":"89f0fc95088cf25e0c7ea7de22b78517ef53cae0c2cb3ad0f1fe89bb1f85144e"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.780930 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vxgr2" podStartSLOduration=142.780908119 podStartE2EDuration="2m22.780908119s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.780694653 +0000 UTC m=+184.507109370" watchObservedRunningTime="2025-11-21 13:35:07.780908119 +0000 UTC m=+184.507322846"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.781396 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.782188 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.282178291 +0000 UTC m=+185.008593018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.806763 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" event={"ID":"feee8664-38cf-4761-b00f-7b9551d7916a","Type":"ContainerStarted","Data":"d7c8c74fe734f6430934b593f9c6a8998371228517398934784ae252903858c6"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.806810 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" event={"ID":"feee8664-38cf-4761-b00f-7b9551d7916a","Type":"ContainerStarted","Data":"00c9eb46b4416ae981cc555f5522b295f2f20ded5cfb51e77c6b3b7efe930d4b"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.807655 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.820019 4675 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7wrmm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.820086 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" podUID="feee8664-38cf-4761-b00f-7b9551d7916a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.858452 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" event={"ID":"39e32a09-8172-443c-bd56-00a536a06de2","Type":"ContainerStarted","Data":"9573c1d363c480b9b13406b9b975bfc8522f3c8089f7e4612ef0dd36d80a5e20"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.859407 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.861477 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" podStartSLOduration=142.861461636 podStartE2EDuration="2m22.861461636s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.819589147 +0000 UTC m=+184.546003874" watchObservedRunningTime="2025-11-21 13:35:07.861461636 +0000 UTC m=+184.587876363"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.861614 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" podStartSLOduration=142.86160938 podStartE2EDuration="2m22.86160938s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.85962319 +0000 UTC m=+184.586037917" watchObservedRunningTime="2025-11-21 13:35:07.86160938 +0000 UTC m=+184.588024107"
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.85962319 +0000 UTC m=+184.586037917" watchObservedRunningTime="2025-11-21 13:35:07.86160938 +0000 UTC m=+184.588024107" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.864541 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r5svl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.864585 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.885581 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.886521 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.386507013 +0000 UTC m=+185.112921740 (durationBeforeRetry 500ms). 
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.889133 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mm7d4" podStartSLOduration=142.889120058 podStartE2EDuration="2m22.889120058s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.888016341 +0000 UTC m=+184.614431058" watchObservedRunningTime="2025-11-21 13:35:07.889120058 +0000 UTC m=+184.615534785"
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.902809 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" event={"ID":"0a962f3e-9813-4fe0-81cc-86faebfc6446","Type":"ContainerStarted","Data":"f198d6a9f2078f2271893b9c2ae18d9be3dccd22cb3f6aed6eb57a1ae9ccf66f"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.908463 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" event={"ID":"531294e8-2e33-4f1f-848a-b2d19d8e6102","Type":"ContainerStarted","Data":"e15c9cf15d7df97bdf3db55be7c2ede622315938610a97f67e07e50350231a79"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.908534 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" event={"ID":"531294e8-2e33-4f1f-848a-b2d19d8e6102","Type":"ContainerStarted","Data":"6f6fc85b1090d26dbce40c5b40c64826393bd0dd7888b62250e45acc48dd4b14"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.918914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5t4j" event={"ID":"561c5bff-b0cf-4d0a-8035-7898ce300a38","Type":"ContainerStarted","Data":"1542acdf3886f2de9171efc5d82fdf80f405ae8d3e79ba02cb69272ba1f740bd"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.918965 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5t4j" event={"ID":"561c5bff-b0cf-4d0a-8035-7898ce300a38","Type":"ContainerStarted","Data":"abef3917e37823eda3940aeff8c19f9e1a3e23c3de0abcb1f5efb3a7b48abaa8"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.939185 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" event={"ID":"07897227-3975-4a8e-88c7-af39b89133af","Type":"ContainerStarted","Data":"7715ff7ffc52e7d4814dfb5a206f0757e42b0b96295189d1242a52eab2d85f56"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.939229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" event={"ID":"07897227-3975-4a8e-88c7-af39b89133af","Type":"ContainerStarted","Data":"390912fc4cba5f221821c26835c73b20234b00a162b9910c2b35fdeb57d4e93f"}
Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.959784 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" event={"ID":"99ff329a-9fc3-4e73-9c02-abad7af09113","Type":"ContainerStarted","Data":"64560d22cfd621382f5ae92418ac7aa55d7860670f56fba067f8dc317fe6ec16"}
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" event={"ID":"99ff329a-9fc3-4e73-9c02-abad7af09113","Type":"ContainerStarted","Data":"64560d22cfd621382f5ae92418ac7aa55d7860670f56fba067f8dc317fe6ec16"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.960020 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" event={"ID":"99ff329a-9fc3-4e73-9c02-abad7af09113","Type":"ContainerStarted","Data":"238dcbde1e329d97c9f83ab5e62f79b422f29d8f7f3d0e59b13ec2cfe28f4a56"} Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.986489 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" podStartSLOduration=142.986473876 podStartE2EDuration="2m22.986473876s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.951290085 +0000 UTC m=+184.677704812" watchObservedRunningTime="2025-11-21 13:35:07.986473876 +0000 UTC m=+184.712888603" Nov 21 13:35:07 crc kubenswrapper[4675]: I1121 13:35:07.992349 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:07 crc kubenswrapper[4675]: E1121 13:35:07.995703 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.495687787 +0000 UTC m=+185.222102604 (durationBeforeRetry 500ms). 
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.019152 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbn9" podStartSLOduration=143.019134524 podStartE2EDuration="2m23.019134524s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:07.98504741 +0000 UTC m=+184.711462137" watchObservedRunningTime="2025-11-21 13:35:08.019134524 +0000 UTC m=+184.745549251"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.019581 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" podStartSLOduration=143.019574965 podStartE2EDuration="2m23.019574965s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:08.018677712 +0000 UTC m=+184.745092439" watchObservedRunningTime="2025-11-21 13:35:08.019574965 +0000 UTC m=+184.745989692"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.054757 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-g5t4j" podStartSLOduration=7.054741525 podStartE2EDuration="7.054741525s" podCreationTimestamp="2025-11-21 13:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:08.054304674 +0000 UTC m=+184.780719401" watchObservedRunningTime="2025-11-21 13:35:08.054741525 +0000 UTC m=+184.781156252"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.084722 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" podStartSLOduration=143.084703406 podStartE2EDuration="2m23.084703406s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:08.083563387 +0000 UTC m=+184.809978134" watchObservedRunningTime="2025-11-21 13:35:08.084703406 +0000 UTC m=+184.811118133"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.087714 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.088111 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.096244 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.096654 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.596637624 +0000 UTC m=+185.323052351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.104097 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.104469 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.107902 4675 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6r8cf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.107951 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" podUID="cb0a749f-6d30-4f56-aba4-10a5339044f7" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.130092 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v694d" podStartSLOduration=143.130077492 podStartE2EDuration="2m23.130077492s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:08.127643301 +0000 UTC m=+184.854058048" watchObservedRunningTime="2025-11-21 13:35:08.130077492 +0000 UTC m=+184.856492219" Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.198340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.198661 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.698649929 +0000 UTC m=+185.425064656 (durationBeforeRetry 500ms). 
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.299049 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.299245 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.799216257 +0000 UTC m=+185.525630994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.299379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.299677 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.799665388 +0000 UTC m=+185.526080115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.300542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-djn7k"]
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.400498 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.400651 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.900626496 +0000 UTC m=+185.627041223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.400706 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.401038 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:08.901030656 +0000 UTC m=+185.627445383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.501975 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.502245 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.00221584 +0000 UTC m=+185.728630577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.603512 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.603973 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.103953657 +0000 UTC m=+185.830368444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.705461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.705680 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.205649994 +0000 UTC m=+185.932064721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.705863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.706311 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.20630311 +0000 UTC m=+185.932717837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
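By this point the mount/unmount pair for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 has been re-queued many times, and every nestedpendingoperations error stamps a fresh "No retries permitted until ... m=+<monotonic seconds>" deadline while durationBeforeRetry stays at the 500ms floor in every message. Extracting those monotonic stamps per operation shows the cadence of the loop, which is useful for confirming the retries are not escalating into something worse. Same assumption as above about the saved kubelet.log:

    # Sketch: group the retry deadlines by operation and print the gaps.
    import re
    from collections import defaultdict

    PAT = re.compile(r"No retries permitted until \S+ \S+ \S+ UTC m=\+([\d.]+).*?Error: (\w+Volume\.\w+)")

    deadlines = defaultdict(list)
    with open("kubelet.log") as fh:              # hypothetical file name
        for line in fh:
            m = PAT.search(line)
            if m:
                deadlines[m.group(2)].append(float(m.group(1)))

    for op, stamps in deadlines.items():
        gaps = [f"{b - a:.3f}" for a, b in zip(stamps, stamps[1:])]
        print(f"{op}: {len(stamps)} failures, gaps(s): {gaps}")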
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.710148 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:35:08 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Nov 21 13:35:08 crc kubenswrapper[4675]: [+]process-running ok
Nov 21 13:35:08 crc kubenswrapper[4675]: healthz check failed
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.710179 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.807115 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.807309 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.307281088 +0000 UTC m=+186.033695845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.807474 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.818524 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.318491729 +0000 UTC m=+186.044906456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.908891 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.909048 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.409017466 +0000 UTC m=+186.135432193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.909109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:08 crc kubenswrapper[4675]: E1121 13:35:08.909441 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.409429196 +0000 UTC m=+186.135843923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.963512 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tkjfz" event={"ID":"50aa3e37-a2f0-418a-ae62-70b46bd19ed2","Type":"ContainerStarted","Data":"c574f5b9dd709f44e54a6dce39724ec2d65d1b27d1a1cf0b848780c82c989ea6"}
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.963655 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tkjfz"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.964895 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tz7pm" event={"ID":"88a7ce6c-1fed-43f5-82c6-44b4fac52dad","Type":"ContainerStarted","Data":"d5dc0c96f86e0dcd18067f38d1b5f77d24ff723c14f3a86fb72e35348dfe6c22"}
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.966292 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" event={"ID":"73a0c3be-9995-4143-8a8a-13d00ff3e702","Type":"ContainerStarted","Data":"dd3dc6f4105bf25c5ff07b1a98858bf0948142fca39be0a0e52381a31449d02b"}
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.966991 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-djn7k" event={"ID":"3034a641-e8c3-4303-bb0e-1da29de3a41b","Type":"ContainerStarted","Data":"b7e6f827946169c46dadee38bfe2f33962659611fb9e4668fbecc73170a4f625"}
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.968290 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z8jjw" event={"ID":"07897227-3975-4a8e-88c7-af39b89133af","Type":"ContainerStarted","Data":"2f56751904d859748798bcb0ff80cde4881c2ac260660c0fd0820f4ebdf96a8d"}
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.969302 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" event={"ID":"e053a129-b32f-4092-831a-db4052fad241","Type":"ContainerStarted","Data":"852dc095b0b876f425115b9c8ab91e00acfb87d32014e2e4a2c69d7a5577c77d"}
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.970034 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r5svl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.970082 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.970126 4675 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wv7l2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.970159 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" podUID="4ecb4fbb-5d9a-44fa-88f7-55b94838dcc1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.970359 4675 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7wrmm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.970380 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" podUID="feee8664-38cf-4761-b00f-7b9551d7916a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.971449 4675 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sg677 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.971484 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677" podUID="c3091040-d378-4c3b-9f64-bf750e9b27f1" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Nov 21 13:35:08 crc kubenswrapper[4675]: I1121 13:35:08.997631 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klhkk"
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.009722 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.011542 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.511528932 +0000 UTC m=+186.237943659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.073158 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rhnsm" podStartSLOduration=144.073137215 podStartE2EDuration="2m24.073137215s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:09.07135628 +0000 UTC m=+185.797771027" watchObservedRunningTime="2025-11-21 13:35:09.073137215 +0000 UTC m=+185.799551942"
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.073869 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tkjfz" podStartSLOduration=8.073863403 podStartE2EDuration="8.073863403s" podCreationTimestamp="2025-11-21 13:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:09.047650557 +0000 UTC m=+185.774065284" watchObservedRunningTime="2025-11-21 13:35:09.073863403 +0000 UTC m=+185.800278130"
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.111688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.111985 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.611972937 +0000 UTC m=+186.338387664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.212465 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.212750 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.71273619 +0000 UTC m=+186.439150917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.313796 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.314175 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.81415944 +0000 UTC m=+186.540574247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.414726 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.414972 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.914935173 +0000 UTC m=+186.641349900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.415317 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.415609 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:09.91560187 +0000 UTC m=+186.642016597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.517010 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.517525 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.017504501 +0000 UTC m=+186.743919228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.550411 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk"
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.618375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.618768 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.118752857 +0000 UTC m=+186.845167584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.641883 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sg677"
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.710724 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:35:09 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Nov 21 13:35:09 crc kubenswrapper[4675]: [+]process-running ok
Nov 21 13:35:09 crc kubenswrapper[4675]: healthz check failed
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.711279 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.719406 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.719763 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.219741835 +0000 UTC m=+186.946156562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.821225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c"
Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.821746 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.321725509 +0000 UTC m=+187.048140296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.921916 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.922057 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.422035851 +0000 UTC m=+187.148450578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.922166 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:09 crc kubenswrapper[4675]: E1121 13:35:09.922503 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.422492952 +0000 UTC m=+187.148907679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.978041 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-djn7k" event={"ID":"3034a641-e8c3-4303-bb0e-1da29de3a41b","Type":"ContainerStarted","Data":"81f32bba26d26c2cb48e316d70b6cb5f8cf2a310de8ac8fa6e7f5dd343e2b2fe"} Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.978116 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-djn7k" event={"ID":"3034a641-e8c3-4303-bb0e-1da29de3a41b","Type":"ContainerStarted","Data":"99f658c553850c13cd32f11968833313d3346fa62edeb54e3acb668bb0628a93"} Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.984257 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r5svl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.984305 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.995364 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vvrpk" Nov 21 13:35:09 crc kubenswrapper[4675]: I1121 13:35:09.998792 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wv7l2" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.023505 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.027618 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.527597524 +0000 UTC m=+187.254012251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.061977 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-djn7k" podStartSLOduration=145.061955524 podStartE2EDuration="2m25.061955524s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:10.022560618 +0000 UTC m=+186.748975345" watchObservedRunningTime="2025-11-21 13:35:10.061955524 +0000 UTC m=+186.788370251" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.128182 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.128603 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.628586072 +0000 UTC m=+187.355000799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.181667 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fjgjf"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.182788 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.186643 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.205302 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjgjf"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.238048 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.238240 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.738210306 +0000 UTC m=+187.464625043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.238670 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.238836 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-catalog-content\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.238961 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-utilities\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.239112 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ttk\" (UniqueName: \"kubernetes.io/projected/38f5849e-24d7-4fb4-8c2b-14c748f61f03-kube-api-access-99ttk\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.239568 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.73955501 +0000 UTC m=+187.465969757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.324643 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7wrmm" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.340443 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.340723 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-catalog-content\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.340758 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-utilities\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.340815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99ttk\" (UniqueName: \"kubernetes.io/projected/38f5849e-24d7-4fb4-8c2b-14c748f61f03-kube-api-access-99ttk\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.341383 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.841357539 +0000 UTC m=+187.567772256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.341750 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-catalog-content\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.341971 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-utilities\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.364291 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rj6fs"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.365519 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.366988 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.386257 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj6fs"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.395022 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99ttk\" (UniqueName: \"kubernetes.io/projected/38f5849e-24d7-4fb4-8c2b-14c748f61f03-kube-api-access-99ttk\") pod \"certified-operators-fjgjf\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.445454 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-utilities\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.445620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvgv\" (UniqueName: \"kubernetes.io/projected/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-kube-api-access-zpvgv\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.445644 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: 
\"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.445662 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-catalog-content\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.445929 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:10.945916827 +0000 UTC m=+187.672331554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.500518 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.546639 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.546882 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-utilities\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.546960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpvgv\" (UniqueName: \"kubernetes.io/projected/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-kube-api-access-zpvgv\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.546991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-catalog-content\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.547421 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-catalog-content\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: 
E1121 13:35:10.547767 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.047752867 +0000 UTC m=+187.774167594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.547997 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-utilities\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.597408 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-frxvh"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.598308 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.623857 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frxvh"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.625934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpvgv\" (UniqueName: \"kubernetes.io/projected/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-kube-api-access-zpvgv\") pod \"community-operators-rj6fs\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.648839 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmw5\" (UniqueName: \"kubernetes.io/projected/4b65642d-93cc-46c0-bba2-69047027f676-kube-api-access-qbmw5\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.648888 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.648995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-catalog-content\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.649016 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-utilities\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.649318 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.14930855 +0000 UTC m=+187.875723277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.711678 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:10 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:10 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:10 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.711728 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.749595 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.749848 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-catalog-content\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.749885 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-utilities\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.749948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbmw5\" (UniqueName: \"kubernetes.io/projected/4b65642d-93cc-46c0-bba2-69047027f676-kube-api-access-qbmw5\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 
13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.750877 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-catalog-content\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.750979 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.250961575 +0000 UTC m=+187.977376302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.757439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-utilities\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.764013 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gppnb"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.765254 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.787538 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gppnb"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.787787 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.795844 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbmw5\" (UniqueName: \"kubernetes.io/projected/4b65642d-93cc-46c0-bba2-69047027f676-kube-api-access-qbmw5\") pod \"certified-operators-frxvh\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.851127 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.851185 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-utilities\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.851355 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjv8\" (UniqueName: \"kubernetes.io/projected/7836be34-1937-4875-849d-f5f7655e7268-kube-api-access-cgjv8\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.851432 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-catalog-content\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.851678 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.351667027 +0000 UTC m=+188.078081754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.889059 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjgjf"] Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.945422 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.952836 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.952999 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.452969563 +0000 UTC m=+188.179384300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.953319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-catalog-content\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.953386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.953460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-utilities\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.953504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjv8\" (UniqueName: \"kubernetes.io/projected/7836be34-1937-4875-849d-f5f7655e7268-kube-api-access-cgjv8\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.953716 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-catalog-content\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: E1121 13:35:10.953773 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.453758173 +0000 UTC m=+188.180172900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.953927 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-utilities\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.972398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjv8\" (UniqueName: \"kubernetes.io/projected/7836be34-1937-4875-849d-f5f7655e7268-kube-api-access-cgjv8\") pod \"community-operators-gppnb\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.987787 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" event={"ID":"e053a129-b32f-4092-831a-db4052fad241","Type":"ContainerStarted","Data":"9bd9f8ce1ae1de9a1afa25fc25e4e37bd19004a290d356bb3d73269cf2c941a2"} Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.989222 4675 generic.go:334] "Generic (PLEG): container finished" podID="531294e8-2e33-4f1f-848a-b2d19d8e6102" containerID="e15c9cf15d7df97bdf3db55be7c2ede622315938610a97f67e07e50350231a79" exitCode=0 Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.989274 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" event={"ID":"531294e8-2e33-4f1f-848a-b2d19d8e6102","Type":"ContainerDied","Data":"e15c9cf15d7df97bdf3db55be7c2ede622315938610a97f67e07e50350231a79"} Nov 21 13:35:10 crc kubenswrapper[4675]: I1121 13:35:10.991229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjgjf" event={"ID":"38f5849e-24d7-4fb4-8c2b-14c748f61f03","Type":"ContainerStarted","Data":"880233d818e7d583c24c155fe67ef7a9c949819dfae39f5faf19d8c1ef468271"} Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.006164 4675 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.054193 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.055752 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.555734536 +0000 UTC m=+188.282149263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.081467 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj6fs"] Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.130764 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.155593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.156829 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.656814767 +0000 UTC m=+188.383229494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.237515 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frxvh"] Nov 21 13:35:11 crc kubenswrapper[4675]: W1121 13:35:11.249723 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b65642d_93cc_46c0_bba2_69047027f676.slice/crio-eb41aec99529d720d9834d520a4c7dee05b221e76c1ccd0ebe02ae4f233430e4 WatchSource:0}: Error finding container eb41aec99529d720d9834d520a4c7dee05b221e76c1ccd0ebe02ae4f233430e4: Status 404 returned error can't find the container with id eb41aec99529d720d9834d520a4c7dee05b221e76c1ccd0ebe02ae4f233430e4 Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.257445 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.257730 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.757717444 +0000 UTC m=+188.484132171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.358849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.359791 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.859773829 +0000 UTC m=+188.586188556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.415261 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gppnb"] Nov 21 13:35:11 crc kubenswrapper[4675]: W1121 13:35:11.421632 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7836be34_1937_4875_849d_f5f7655e7268.slice/crio-eeb892405e466f34278fa5d1a29df943575c73ae21a3fd6ae04b1ed5085877a9 WatchSource:0}: Error finding container eeb892405e466f34278fa5d1a29df943575c73ae21a3fd6ae04b1ed5085877a9: Status 404 returned error can't find the container with id eeb892405e466f34278fa5d1a29df943575c73ae21a3fd6ae04b1ed5085877a9 Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.459923 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.460108 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.960082111 +0000 UTC m=+188.686496848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.460252 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.460571 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:35:11.960560223 +0000 UTC m=+188.686974940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nd8c" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.561532 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:11 crc kubenswrapper[4675]: E1121 13:35:11.561971 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:35:12.061952461 +0000 UTC m=+188.788367188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.566087 4675 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-21T13:35:11.006203316Z","Handler":null,"Name":""} Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.572654 4675 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.572685 4675 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.614684 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.615810 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.617485 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.617786 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.626562 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.663553 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.663616 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.663704 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.665995 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.666024 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.686539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nd8c\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.707703 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:11 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:11 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:11 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.707763 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.764986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.765212 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.765285 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.765349 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.780438 4675 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.796843 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.972409 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.976907 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:11 crc kubenswrapper[4675]: I1121 13:35:11.981118 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.039051 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" event={"ID":"e053a129-b32f-4092-831a-db4052fad241","Type":"ContainerStarted","Data":"b4306e830d01e46f72fbe6f0524eb284efa561ffa03eba5b89f0640f298aa946"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.039116 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" event={"ID":"e053a129-b32f-4092-831a-db4052fad241","Type":"ContainerStarted","Data":"fc45babc858e7d023eb14aa54a81818388260508006bb4b71d8c88e090bb9f87"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.043929 4675 generic.go:334] "Generic (PLEG): container finished" podID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerID="54a48833a0023ff9fc3ae91970fb51cee5f8b0e5d04b168c40916aad766cc2fd" exitCode=0 Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.044185 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjgjf" event={"ID":"38f5849e-24d7-4fb4-8c2b-14c748f61f03","Type":"ContainerDied","Data":"54a48833a0023ff9fc3ae91970fb51cee5f8b0e5d04b168c40916aad766cc2fd"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.048627 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.053341 4675 generic.go:334] "Generic (PLEG): container finished" podID="7836be34-1937-4875-849d-f5f7655e7268" containerID="91f0ecea10cad49c1e660636154d2ddc32c5a4b597edd330d380f70b1545f65a" exitCode=0 Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.053422 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gppnb" event={"ID":"7836be34-1937-4875-849d-f5f7655e7268","Type":"ContainerDied","Data":"91f0ecea10cad49c1e660636154d2ddc32c5a4b597edd330d380f70b1545f65a"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.053487 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gppnb" 
event={"ID":"7836be34-1937-4875-849d-f5f7655e7268","Type":"ContainerStarted","Data":"eeb892405e466f34278fa5d1a29df943575c73ae21a3fd6ae04b1ed5085877a9"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.064954 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b65642d-93cc-46c0-bba2-69047027f676" containerID="e71077dfac040065c7a025d0bc44a1e1e6159bc301d19d1ce9b0414ad23367ff" exitCode=0 Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.065024 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frxvh" event={"ID":"4b65642d-93cc-46c0-bba2-69047027f676","Type":"ContainerDied","Data":"e71077dfac040065c7a025d0bc44a1e1e6159bc301d19d1ce9b0414ad23367ff"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.065059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frxvh" event={"ID":"4b65642d-93cc-46c0-bba2-69047027f676","Type":"ContainerStarted","Data":"eb41aec99529d720d9834d520a4c7dee05b221e76c1ccd0ebe02ae4f233430e4"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.066254 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9tglw" podStartSLOduration=12.066231788 podStartE2EDuration="12.066231788s" podCreationTimestamp="2025-11-21 13:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:12.06392724 +0000 UTC m=+188.790341957" watchObservedRunningTime="2025-11-21 13:35:12.066231788 +0000 UTC m=+188.792646515" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.068048 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerID="dc9bca587bf032e94e7edc0e5cdf3b4c4a5ca26fe57e557c19ea47d7f8d4823e" exitCode=0 Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.068150 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj6fs" event={"ID":"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c","Type":"ContainerDied","Data":"dc9bca587bf032e94e7edc0e5cdf3b4c4a5ca26fe57e557c19ea47d7f8d4823e"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.068191 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj6fs" event={"ID":"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c","Type":"ContainerStarted","Data":"cb2f5288fc60c3d7b1f0a42b912533e8a85722b6c1d269eb696a6ade3e5d179d"} Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.158540 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ztth5"] Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.160595 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.168969 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.170343 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8rt\" (UniqueName: \"kubernetes.io/projected/04c7001c-ca6f-43a9-b828-02697c5e581a-kube-api-access-zh8rt\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.170393 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-catalog-content\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.170568 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-utilities\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.176674 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztth5"] Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.271544 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-utilities\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.271625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh8rt\" (UniqueName: \"kubernetes.io/projected/04c7001c-ca6f-43a9-b828-02697c5e581a-kube-api-access-zh8rt\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.271664 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-catalog-content\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.272233 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-catalog-content\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.272923 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-utilities\") pod \"redhat-marketplace-ztth5\" (UID: 
\"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.279387 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.310369 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh8rt\" (UniqueName: \"kubernetes.io/projected/04c7001c-ca6f-43a9-b828-02697c5e581a-kube-api-access-zh8rt\") pod \"redhat-marketplace-ztth5\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.333709 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nd8c"] Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.337017 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:12 crc kubenswrapper[4675]: W1121 13:35:12.344201 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872d6e82_4322_4b06_a8e1_c3f23aea4c45.slice/crio-b395f991d10da316dc45a6f1053da3b70290f86388196ea783159bb5f6715418 WatchSource:0}: Error finding container b395f991d10da316dc45a6f1053da3b70290f86388196ea783159bb5f6715418: Status 404 returned error can't find the container with id b395f991d10da316dc45a6f1053da3b70290f86388196ea783159bb5f6715418 Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.473446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/531294e8-2e33-4f1f-848a-b2d19d8e6102-config-volume\") pod \"531294e8-2e33-4f1f-848a-b2d19d8e6102\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.473816 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/531294e8-2e33-4f1f-848a-b2d19d8e6102-secret-volume\") pod \"531294e8-2e33-4f1f-848a-b2d19d8e6102\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.473888 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vntkd\" (UniqueName: \"kubernetes.io/projected/531294e8-2e33-4f1f-848a-b2d19d8e6102-kube-api-access-vntkd\") pod \"531294e8-2e33-4f1f-848a-b2d19d8e6102\" (UID: \"531294e8-2e33-4f1f-848a-b2d19d8e6102\") " Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.474415 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531294e8-2e33-4f1f-848a-b2d19d8e6102-config-volume" (OuterVolumeSpecName: "config-volume") pod "531294e8-2e33-4f1f-848a-b2d19d8e6102" (UID: "531294e8-2e33-4f1f-848a-b2d19d8e6102"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.480307 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531294e8-2e33-4f1f-848a-b2d19d8e6102-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "531294e8-2e33-4f1f-848a-b2d19d8e6102" (UID: "531294e8-2e33-4f1f-848a-b2d19d8e6102"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.480355 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531294e8-2e33-4f1f-848a-b2d19d8e6102-kube-api-access-vntkd" (OuterVolumeSpecName: "kube-api-access-vntkd") pod "531294e8-2e33-4f1f-848a-b2d19d8e6102" (UID: "531294e8-2e33-4f1f-848a-b2d19d8e6102"). InnerVolumeSpecName "kube-api-access-vntkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.498455 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.554954 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lhmq8"] Nov 21 13:35:12 crc kubenswrapper[4675]: E1121 13:35:12.555269 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531294e8-2e33-4f1f-848a-b2d19d8e6102" containerName="collect-profiles" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.555339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="531294e8-2e33-4f1f-848a-b2d19d8e6102" containerName="collect-profiles" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.555485 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="531294e8-2e33-4f1f-848a-b2d19d8e6102" containerName="collect-profiles" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.570614 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhmq8"] Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.570772 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.575790 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/531294e8-2e33-4f1f-848a-b2d19d8e6102-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.575827 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/531294e8-2e33-4f1f-848a-b2d19d8e6102-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.575841 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vntkd\" (UniqueName: \"kubernetes.io/projected/531294e8-2e33-4f1f-848a-b2d19d8e6102-kube-api-access-vntkd\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.678046 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-utilities\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.678446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-catalog-content\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.678532 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6d2x\" (UniqueName: \"kubernetes.io/projected/606fd1fb-4bb3-434c-a004-07233720375a-kube-api-access-n6d2x\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.708865 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:12 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:12 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:12 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.708929 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.750030 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztth5"] Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.779770 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6d2x\" (UniqueName: \"kubernetes.io/projected/606fd1fb-4bb3-434c-a004-07233720375a-kube-api-access-n6d2x\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.779870 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-utilities\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.779954 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-catalog-content\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.781258 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-catalog-content\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.781444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-utilities\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: W1121 13:35:12.782098 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04c7001c_ca6f_43a9_b828_02697c5e581a.slice/crio-7678d6aef048928c6b243701ba34b18678d40a09d93b71755fdabb902f53f3e8 WatchSource:0}: Error finding container 7678d6aef048928c6b243701ba34b18678d40a09d93b71755fdabb902f53f3e8: Status 404 returned error can't find the container with id 7678d6aef048928c6b243701ba34b18678d40a09d93b71755fdabb902f53f3e8 Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.800730 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6d2x\" (UniqueName: \"kubernetes.io/projected/606fd1fb-4bb3-434c-a004-07233720375a-kube-api-access-n6d2x\") pod \"redhat-marketplace-lhmq8\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.859095 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.901408 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.909566 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.909620 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.909687 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 13:35:12 crc kubenswrapper[4675]: I1121 13:35:12.909733 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.051832 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.052963 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.057904 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.060688 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.062518 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.087991 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.092132 4675 generic.go:334] "Generic (PLEG): container finished" podID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerID="a50049c466181bc9ff925b8d704917ceb11fd73cbd4b0000953506d1bdc1495c" exitCode=0 Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.092362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztth5" event={"ID":"04c7001c-ca6f-43a9-b828-02697c5e581a","Type":"ContainerDied","Data":"a50049c466181bc9ff925b8d704917ceb11fd73cbd4b0000953506d1bdc1495c"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.092402 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztth5" event={"ID":"04c7001c-ca6f-43a9-b828-02697c5e581a","Type":"ContainerStarted","Data":"7678d6aef048928c6b243701ba34b18678d40a09d93b71755fdabb902f53f3e8"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.096431 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" event={"ID":"531294e8-2e33-4f1f-848a-b2d19d8e6102","Type":"ContainerDied","Data":"6f6fc85b1090d26dbce40c5b40c64826393bd0dd7888b62250e45acc48dd4b14"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.096471 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f6fc85b1090d26dbce40c5b40c64826393bd0dd7888b62250e45acc48dd4b14" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.096543 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.098478 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6r8cf" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.119981 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f9d7ca7b-c5d0-430e-a66a-bca384879ef5","Type":"ContainerStarted","Data":"07314fbc9bafe5b7acefdb82dd53bcd2b1d40c1cb31e59a3513a82ec17b66a0f"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.120112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f9d7ca7b-c5d0-430e-a66a-bca384879ef5","Type":"ContainerStarted","Data":"1f807b081061315f54a8e36a6ad7c9a20c7deddc224f064575e78092092b1377"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.128602 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" event={"ID":"872d6e82-4322-4b06-a8e1-c3f23aea4c45","Type":"ContainerStarted","Data":"0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.128639 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" event={"ID":"872d6e82-4322-4b06-a8e1-c3f23aea4c45","Type":"ContainerStarted","Data":"b395f991d10da316dc45a6f1053da3b70290f86388196ea783159bb5f6715418"} Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.128657 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.160428 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.160460 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.170543 4675 patch_prober.go:28] interesting pod/console-f9d7485db-cz299 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.170614 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cz299" podUID="0d4777cf-9799-450d-a46f-5d5bedeaa706" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.190836 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.191055 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.213830 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhmq8"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.229587 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" podStartSLOduration=148.229568537 podStartE2EDuration="2m28.229568537s" podCreationTimestamp="2025-11-21 13:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:13.225543266 +0000 UTC m=+189.951957993" watchObservedRunningTime="2025-11-21 13:35:13.229568537 +0000 UTC m=+189.955983264" Nov 21 13:35:13 crc kubenswrapper[4675]: W1121 13:35:13.268651 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606fd1fb_4bb3_434c_a004_07233720375a.slice/crio-975286d8b9bbbfb7ee3aac0edb86092e50ce454697a6e3097899ddd4a9eaa87a WatchSource:0}: Error finding container 975286d8b9bbbfb7ee3aac0edb86092e50ce454697a6e3097899ddd4a9eaa87a: Status 404 returned error can't find the container with id 975286d8b9bbbfb7ee3aac0edb86092e50ce454697a6e3097899ddd4a9eaa87a Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.293140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.293321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.294672 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.331218 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.370985 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.370965477 podStartE2EDuration="2.370965477s" podCreationTimestamp="2025-11-21 13:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:13.256475891 +0000 UTC m=+189.982890628" watchObservedRunningTime="2025-11-21 13:35:13.370965477 +0000 UTC m=+190.097380204" 
Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.374106 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjkqd"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.375082 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.379504 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.388119 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.389939 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjkqd"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.500916 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-catalog-content\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.500996 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-utilities\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.501018 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbnn\" (UniqueName: \"kubernetes.io/projected/69663c02-7b8c-478d-8975-79fae4dbadea-kube-api-access-zpbnn\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.603710 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-utilities\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.603220 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-utilities\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.604180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbnn\" (UniqueName: \"kubernetes.io/projected/69663c02-7b8c-478d-8975-79fae4dbadea-kube-api-access-zpbnn\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.604291 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-catalog-content\") pod 
\"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.604641 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-catalog-content\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.626476 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbnn\" (UniqueName: \"kubernetes.io/projected/69663c02-7b8c-478d-8975-79fae4dbadea-kube-api-access-zpbnn\") pod \"redhat-operators-rjkqd\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.698642 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.704871 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.709645 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:13 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:13 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:13 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.709693 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.735483 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.771032 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-25kf4"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.780161 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.780334 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25kf4"] Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.908980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9nzg\" (UniqueName: \"kubernetes.io/projected/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-kube-api-access-h9nzg\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.909028 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-utilities\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:13 crc kubenswrapper[4675]: I1121 13:35:13.909210 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-catalog-content\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.010815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9nzg\" (UniqueName: \"kubernetes.io/projected/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-kube-api-access-h9nzg\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.011133 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-utilities\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.011194 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-catalog-content\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.011651 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-catalog-content\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.011883 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-utilities\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.035089 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h9nzg\" (UniqueName: \"kubernetes.io/projected/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-kube-api-access-h9nzg\") pod \"redhat-operators-25kf4\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.105175 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.159221 4675 generic.go:334] "Generic (PLEG): container finished" podID="f9d7ca7b-c5d0-430e-a66a-bca384879ef5" containerID="07314fbc9bafe5b7acefdb82dd53bcd2b1d40c1cb31e59a3513a82ec17b66a0f" exitCode=0 Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.159705 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f9d7ca7b-c5d0-430e-a66a-bca384879ef5","Type":"ContainerDied","Data":"07314fbc9bafe5b7acefdb82dd53bcd2b1d40c1cb31e59a3513a82ec17b66a0f"} Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.184996 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f","Type":"ContainerStarted","Data":"c0ed90fdc331c02b7d3cb1ee1eb689db258039122cae66bafe6a576a5d845433"} Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.188779 4675 generic.go:334] "Generic (PLEG): container finished" podID="606fd1fb-4bb3-434c-a004-07233720375a" containerID="15a8a3082a296bf218ab6d19a305d95ffd196911cdda02c9f27889e8ef0ab269" exitCode=0 Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.190570 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhmq8" event={"ID":"606fd1fb-4bb3-434c-a004-07233720375a","Type":"ContainerDied","Data":"15a8a3082a296bf218ab6d19a305d95ffd196911cdda02c9f27889e8ef0ab269"} Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.190634 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhmq8" event={"ID":"606fd1fb-4bb3-434c-a004-07233720375a","Type":"ContainerStarted","Data":"975286d8b9bbbfb7ee3aac0edb86092e50ce454697a6e3097899ddd4a9eaa87a"} Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.192471 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjkqd"] Nov 21 13:35:14 crc kubenswrapper[4675]: W1121 13:35:14.208735 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69663c02_7b8c_478d_8975_79fae4dbadea.slice/crio-eb142fb7753dda93d2d591b46ba843b932c84d0de9fb81c22db52a0a850369b4 WatchSource:0}: Error finding container eb142fb7753dda93d2d591b46ba843b932c84d0de9fb81c22db52a0a850369b4: Status 404 returned error can't find the container with id eb142fb7753dda93d2d591b46ba843b932c84d0de9fb81c22db52a0a850369b4 Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.373758 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.511325 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25kf4"] Nov 21 13:35:14 crc kubenswrapper[4675]: W1121 13:35:14.527688 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864e7914_4c9e_4b74_aa0c_c363b3b9a01f.slice/crio-9ee3e3994aab037beeb9a211fef96f3f0b0c6b80d2232af7c221075d876cceff WatchSource:0}: Error finding container 9ee3e3994aab037beeb9a211fef96f3f0b0c6b80d2232af7c221075d876cceff: Status 404 returned error can't find the container with id 9ee3e3994aab037beeb9a211fef96f3f0b0c6b80d2232af7c221075d876cceff Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.710308 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:14 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:14 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:14 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:14 crc kubenswrapper[4675]: I1121 13:35:14.710369 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.208693 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerStarted","Data":"e565b363e3c31356cc729b3a7983a4aa15b836cec41ff444276253cc328085fa"} Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.209148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerStarted","Data":"eb142fb7753dda93d2d591b46ba843b932c84d0de9fb81c22db52a0a850369b4"} Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.214728 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerStarted","Data":"9ee3e3994aab037beeb9a211fef96f3f0b0c6b80d2232af7c221075d876cceff"} Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.488090 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.540424 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kubelet-dir\") pod \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.540932 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kube-api-access\") pod \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\" (UID: \"f9d7ca7b-c5d0-430e-a66a-bca384879ef5\") " Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.541047 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f9d7ca7b-c5d0-430e-a66a-bca384879ef5" (UID: "f9d7ca7b-c5d0-430e-a66a-bca384879ef5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.542290 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.561888 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f9d7ca7b-c5d0-430e-a66a-bca384879ef5" (UID: "f9d7ca7b-c5d0-430e-a66a-bca384879ef5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.643820 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d7ca7b-c5d0-430e-a66a-bca384879ef5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.709252 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:15 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:15 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:15 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:15 crc kubenswrapper[4675]: I1121 13:35:15.709310 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.137512 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.137570 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.208493 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.229431 4675 generic.go:334] "Generic (PLEG): container finished" podID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerID="2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd" exitCode=0 Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.229497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerDied","Data":"2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd"} Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.250262 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"f9d7ca7b-c5d0-430e-a66a-bca384879ef5","Type":"ContainerDied","Data":"1f807b081061315f54a8e36a6ad7c9a20c7deddc224f064575e78092092b1377"} Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.250301 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f807b081061315f54a8e36a6ad7c9a20c7deddc224f064575e78092092b1377" Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.250367 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.272750 4675 generic.go:334] "Generic (PLEG): container finished" podID="69663c02-7b8c-478d-8975-79fae4dbadea" containerID="e565b363e3c31356cc729b3a7983a4aa15b836cec41ff444276253cc328085fa" exitCode=0 Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.272883 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerDied","Data":"e565b363e3c31356cc729b3a7983a4aa15b836cec41ff444276253cc328085fa"} Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.295034 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f","Type":"ContainerStarted","Data":"bd9ee94ee5f09040154ca537224fac009e11a074bd980d2ddee22551b60eeaae"} Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.322780 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.322757746 podStartE2EDuration="3.322757746s" podCreationTimestamp="2025-11-21 13:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:16.32011355 +0000 UTC m=+193.046528277" watchObservedRunningTime="2025-11-21 13:35:16.322757746 +0000 UTC m=+193.049172463" Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.722269 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:16 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Nov 21 13:35:16 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:16 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:16 crc kubenswrapper[4675]: I1121 13:35:16.722595 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:17 crc kubenswrapper[4675]: I1121 13:35:17.307411 4675 generic.go:334] "Generic (PLEG): container finished" podID="eb9748fc-dd1d-4029-b8ec-706d77ff7d0f" containerID="bd9ee94ee5f09040154ca537224fac009e11a074bd980d2ddee22551b60eeaae" exitCode=0 Nov 21 13:35:17 crc kubenswrapper[4675]: I1121 13:35:17.309358 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f","Type":"ContainerDied","Data":"bd9ee94ee5f09040154ca537224fac009e11a074bd980d2ddee22551b60eeaae"} Nov 21 13:35:17 crc kubenswrapper[4675]: I1121 13:35:17.718550 4675 patch_prober.go:28] interesting 
pod/router-default-5444994796-xs2g9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:35:17 crc kubenswrapper[4675]: [+]has-synced ok Nov 21 13:35:17 crc kubenswrapper[4675]: [+]process-running ok Nov 21 13:35:17 crc kubenswrapper[4675]: healthz check failed Nov 21 13:35:17 crc kubenswrapper[4675]: I1121 13:35:17.718613 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs2g9" podUID="da9a4b5e-bce2-48d5-9aec-e681063b19de" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.708718 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.712727 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xs2g9" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.716397 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.801445 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kubelet-dir\") pod \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.801523 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kube-api-access\") pod \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\" (UID: \"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f\") " Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.802866 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb9748fc-dd1d-4029-b8ec-706d77ff7d0f" (UID: "eb9748fc-dd1d-4029-b8ec-706d77ff7d0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.830349 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb9748fc-dd1d-4029-b8ec-706d77ff7d0f" (UID: "eb9748fc-dd1d-4029-b8ec-706d77ff7d0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.903257 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:18 crc kubenswrapper[4675]: I1121 13:35:18.903298 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9748fc-dd1d-4029-b8ec-706d77ff7d0f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:19 crc kubenswrapper[4675]: I1121 13:35:19.326556 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:35:19 crc kubenswrapper[4675]: I1121 13:35:19.326558 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb9748fc-dd1d-4029-b8ec-706d77ff7d0f","Type":"ContainerDied","Data":"c0ed90fdc331c02b7d3cb1ee1eb689db258039122cae66bafe6a576a5d845433"} Nov 21 13:35:19 crc kubenswrapper[4675]: I1121 13:35:19.326605 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ed90fdc331c02b7d3cb1ee1eb689db258039122cae66bafe6a576a5d845433" Nov 21 13:35:19 crc kubenswrapper[4675]: I1121 13:35:19.462663 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tkjfz" Nov 21 13:35:22 crc kubenswrapper[4675]: I1121 13:35:22.909746 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 13:35:22 crc kubenswrapper[4675]: I1121 13:35:22.910354 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 13:35:22 crc kubenswrapper[4675]: I1121 13:35:22.909803 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-ms8dm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 13:35:22 crc kubenswrapper[4675]: I1121 13:35:22.910428 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ms8dm" podUID="6190d999-660a-44f5-a51b-cd53647289db" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 13:35:23 crc kubenswrapper[4675]: I1121 13:35:23.176848 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:23 crc kubenswrapper[4675]: I1121 13:35:23.181086 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:35:31 crc kubenswrapper[4675]: I1121 13:35:31.989587 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:35:32 crc kubenswrapper[4675]: I1121 13:35:32.917608 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ms8dm" Nov 21 13:35:44 crc kubenswrapper[4675]: I1121 13:35:44.052655 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmpjz" Nov 21 13:35:46 crc kubenswrapper[4675]: I1121 13:35:46.137013 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:35:46 crc 
kubenswrapper[4675]: I1121 13:35:46.137871 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:35:46 crc kubenswrapper[4675]: I1121 13:35:46.138123 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:35:46 crc kubenswrapper[4675]: I1121 13:35:46.139102 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:35:46 crc kubenswrapper[4675]: I1121 13:35:46.139383 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434" gracePeriod=600 Nov 21 13:35:48 crc kubenswrapper[4675]: I1121 13:35:48.479050 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434" exitCode=0 Nov 21 13:35:48 crc kubenswrapper[4675]: I1121 13:35:48.479211 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434"} Nov 21 13:35:58 crc kubenswrapper[4675]: E1121 13:35:58.336401 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 21 13:35:58 crc kubenswrapper[4675]: E1121 13:35:58.337033 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpbnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rjkqd_openshift-marketplace(69663c02-7b8c-478d-8975-79fae4dbadea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 13:35:58 crc kubenswrapper[4675]: E1121 13:35:58.338255 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rjkqd" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" Nov 21 13:36:04 crc kubenswrapper[4675]: E1121 13:36:04.956626 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 21 13:36:04 crc kubenswrapper[4675]: E1121 13:36:04.957352 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6d2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lhmq8_openshift-marketplace(606fd1fb-4bb3-434c-a004-07233720375a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 13:36:04 crc kubenswrapper[4675]: E1121 13:36:04.958711 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lhmq8" podUID="606fd1fb-4bb3-434c-a004-07233720375a" Nov 21 13:36:10 crc kubenswrapper[4675]: E1121 13:36:10.836561 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lhmq8" podUID="606fd1fb-4bb3-434c-a004-07233720375a" Nov 21 13:36:10 crc kubenswrapper[4675]: E1121 13:36:10.888056 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 21 13:36:10 crc kubenswrapper[4675]: E1121 13:36:10.888233 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9nzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-25kf4_openshift-marketplace(864e7914-4c9e-4b74-aa0c-c363b3b9a01f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 13:36:10 crc kubenswrapper[4675]: E1121 13:36:10.889463 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-25kf4" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" Nov 21 13:36:11 crc kubenswrapper[4675]: E1121 13:36:11.360570 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 21 13:36:11 crc kubenswrapper[4675]: E1121 13:36:11.360967 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh8rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ztth5_openshift-marketplace(04c7001c-ca6f-43a9-b828-02697c5e581a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 13:36:11 crc kubenswrapper[4675]: E1121 13:36:11.363129 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ztth5" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.480857 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ztth5" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.582311 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.582738 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gppnb_openshift-marketplace(7836be34-1937-4875-849d-f5f7655e7268): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.583958 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gppnb" podUID="7836be34-1937-4875-849d-f5f7655e7268" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.611821 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gppnb" podUID="7836be34-1937-4875-849d-f5f7655e7268" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.740976 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.741144 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpvgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rj6fs_openshift-marketplace(8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 13:36:12 crc kubenswrapper[4675]: E1121 13:36:12.742315 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rj6fs" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" Nov 21 13:36:13 crc kubenswrapper[4675]: I1121 13:36:13.616232 4675 generic.go:334] "Generic (PLEG): container finished" podID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerID="7bf021d5b6c79287d9684e2424ba0619342bbc5bc7ac90a4e6d53d1bd36acd45" exitCode=0 Nov 21 13:36:13 crc kubenswrapper[4675]: I1121 13:36:13.616851 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjgjf" event={"ID":"38f5849e-24d7-4fb4-8c2b-14c748f61f03","Type":"ContainerDied","Data":"7bf021d5b6c79287d9684e2424ba0619342bbc5bc7ac90a4e6d53d1bd36acd45"} Nov 21 13:36:13 crc kubenswrapper[4675]: I1121 13:36:13.623118 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"856f537a5bd9ff3fb685a8ef1888397fd35d05ca4e4e1461b5e1c59414d0ee63"} Nov 21 13:36:13 crc kubenswrapper[4675]: I1121 13:36:13.625675 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerStarted","Data":"31ab96fc4c703590459e3ce01ded75259259fd87079bf7b146248c9bddd1af78"} Nov 21 13:36:13 crc kubenswrapper[4675]: I1121 13:36:13.628275 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b65642d-93cc-46c0-bba2-69047027f676" containerID="41a3fe002150d58c927efa4554879be6ae8e5e1b42d1718634599b37677dcb26" exitCode=0 Nov 21 13:36:13 crc kubenswrapper[4675]: I1121 13:36:13.629057 4675 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frxvh" event={"ID":"4b65642d-93cc-46c0-bba2-69047027f676","Type":"ContainerDied","Data":"41a3fe002150d58c927efa4554879be6ae8e5e1b42d1718634599b37677dcb26"} Nov 21 13:36:13 crc kubenswrapper[4675]: E1121 13:36:13.629683 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rj6fs" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" Nov 21 13:36:14 crc kubenswrapper[4675]: I1121 13:36:14.634540 4675 generic.go:334] "Generic (PLEG): container finished" podID="69663c02-7b8c-478d-8975-79fae4dbadea" containerID="31ab96fc4c703590459e3ce01ded75259259fd87079bf7b146248c9bddd1af78" exitCode=0 Nov 21 13:36:14 crc kubenswrapper[4675]: I1121 13:36:14.634595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerDied","Data":"31ab96fc4c703590459e3ce01ded75259259fd87079bf7b146248c9bddd1af78"} Nov 21 13:36:15 crc kubenswrapper[4675]: I1121 13:36:15.643562 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frxvh" event={"ID":"4b65642d-93cc-46c0-bba2-69047027f676","Type":"ContainerStarted","Data":"c22574d5fcbfedaeb47c6ab9b05187785294e4b7b4a55dee74cbf9412d5eb8a0"} Nov 21 13:36:16 crc kubenswrapper[4675]: I1121 13:36:16.651461 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerStarted","Data":"a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5"} Nov 21 13:36:16 crc kubenswrapper[4675]: I1121 13:36:16.654704 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjgjf" event={"ID":"38f5849e-24d7-4fb4-8c2b-14c748f61f03","Type":"ContainerStarted","Data":"416035169cbc201d0c7a7193957000cf7ea61e2108093ad3fb246479d1bc69ab"} Nov 21 13:36:16 crc kubenswrapper[4675]: I1121 13:36:16.687952 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-frxvh" podStartSLOduration=3.954733312 podStartE2EDuration="1m6.687936938s" podCreationTimestamp="2025-11-21 13:35:10 +0000 UTC" firstStartedPulling="2025-11-21 13:35:12.06751954 +0000 UTC m=+188.793934267" lastFinishedPulling="2025-11-21 13:36:14.800723156 +0000 UTC m=+251.527137893" observedRunningTime="2025-11-21 13:36:15.681370205 +0000 UTC m=+252.407784972" watchObservedRunningTime="2025-11-21 13:36:16.687936938 +0000 UTC m=+253.414351665" Nov 21 13:36:16 crc kubenswrapper[4675]: I1121 13:36:16.688580 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjkqd" podStartSLOduration=3.670739213 podStartE2EDuration="1m3.688576685s" podCreationTimestamp="2025-11-21 13:35:13 +0000 UTC" firstStartedPulling="2025-11-21 13:35:16.274561279 +0000 UTC m=+193.000976006" lastFinishedPulling="2025-11-21 13:36:16.292398741 +0000 UTC m=+253.018813478" observedRunningTime="2025-11-21 13:36:16.687592129 +0000 UTC m=+253.414006866" watchObservedRunningTime="2025-11-21 13:36:16.688576685 +0000 UTC m=+253.414991412" Nov 21 13:36:16 crc kubenswrapper[4675]: I1121 13:36:16.716144 4675 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-fjgjf" podStartSLOduration=2.823617973 podStartE2EDuration="1m6.716125619s" podCreationTimestamp="2025-11-21 13:35:10 +0000 UTC" firstStartedPulling="2025-11-21 13:35:12.048259668 +0000 UTC m=+188.774674405" lastFinishedPulling="2025-11-21 13:36:15.940767314 +0000 UTC m=+252.667182051" observedRunningTime="2025-11-21 13:36:16.714695871 +0000 UTC m=+253.441110608" watchObservedRunningTime="2025-11-21 13:36:16.716125619 +0000 UTC m=+253.442540346" Nov 21 13:36:20 crc kubenswrapper[4675]: I1121 13:36:20.502179 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:36:20 crc kubenswrapper[4675]: I1121 13:36:20.502684 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:36:20 crc kubenswrapper[4675]: I1121 13:36:20.689900 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:36:20 crc kubenswrapper[4675]: I1121 13:36:20.946255 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:36:20 crc kubenswrapper[4675]: I1121 13:36:20.946518 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:36:20 crc kubenswrapper[4675]: I1121 13:36:20.985206 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:36:21 crc kubenswrapper[4675]: I1121 13:36:21.729271 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:36:23 crc kubenswrapper[4675]: I1121 13:36:23.622290 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-frxvh"] Nov 21 13:36:23 crc kubenswrapper[4675]: I1121 13:36:23.699850 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:36:23 crc kubenswrapper[4675]: I1121 13:36:23.700552 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:36:23 crc kubenswrapper[4675]: I1121 13:36:23.740800 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:36:24 crc kubenswrapper[4675]: I1121 13:36:24.699455 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerStarted","Data":"643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438"} Nov 21 13:36:24 crc kubenswrapper[4675]: I1121 13:36:24.699828 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-frxvh" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="registry-server" containerID="cri-o://c22574d5fcbfedaeb47c6ab9b05187785294e4b7b4a55dee74cbf9412d5eb8a0" gracePeriod=2 Nov 21 13:36:24 crc kubenswrapper[4675]: I1121 13:36:24.740998 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:36:25 crc kubenswrapper[4675]: I1121 13:36:25.707485 4675 generic.go:334] "Generic 
(PLEG): container finished" podID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerID="643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438" exitCode=0 Nov 21 13:36:25 crc kubenswrapper[4675]: I1121 13:36:25.707569 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerDied","Data":"643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438"} Nov 21 13:36:25 crc kubenswrapper[4675]: I1121 13:36:25.711810 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b65642d-93cc-46c0-bba2-69047027f676" containerID="c22574d5fcbfedaeb47c6ab9b05187785294e4b7b4a55dee74cbf9412d5eb8a0" exitCode=0 Nov 21 13:36:25 crc kubenswrapper[4675]: I1121 13:36:25.711880 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frxvh" event={"ID":"4b65642d-93cc-46c0-bba2-69047027f676","Type":"ContainerDied","Data":"c22574d5fcbfedaeb47c6ab9b05187785294e4b7b4a55dee74cbf9412d5eb8a0"} Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.230337 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.396986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-utilities\") pod \"4b65642d-93cc-46c0-bba2-69047027f676\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.397130 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmw5\" (UniqueName: \"kubernetes.io/projected/4b65642d-93cc-46c0-bba2-69047027f676-kube-api-access-qbmw5\") pod \"4b65642d-93cc-46c0-bba2-69047027f676\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.397160 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-catalog-content\") pod \"4b65642d-93cc-46c0-bba2-69047027f676\" (UID: \"4b65642d-93cc-46c0-bba2-69047027f676\") " Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.398138 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-utilities" (OuterVolumeSpecName: "utilities") pod "4b65642d-93cc-46c0-bba2-69047027f676" (UID: "4b65642d-93cc-46c0-bba2-69047027f676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.402171 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b65642d-93cc-46c0-bba2-69047027f676-kube-api-access-qbmw5" (OuterVolumeSpecName: "kube-api-access-qbmw5") pod "4b65642d-93cc-46c0-bba2-69047027f676" (UID: "4b65642d-93cc-46c0-bba2-69047027f676"). InnerVolumeSpecName "kube-api-access-qbmw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.498560 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbmw5\" (UniqueName: \"kubernetes.io/projected/4b65642d-93cc-46c0-bba2-69047027f676-kube-api-access-qbmw5\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.498600 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.719183 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frxvh" event={"ID":"4b65642d-93cc-46c0-bba2-69047027f676","Type":"ContainerDied","Data":"eb41aec99529d720d9834d520a4c7dee05b221e76c1ccd0ebe02ae4f233430e4"} Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.719248 4675 scope.go:117] "RemoveContainer" containerID="c22574d5fcbfedaeb47c6ab9b05187785294e4b7b4a55dee74cbf9412d5eb8a0" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.719207 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frxvh" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.739409 4675 scope.go:117] "RemoveContainer" containerID="41a3fe002150d58c927efa4554879be6ae8e5e1b42d1718634599b37677dcb26" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.753957 4675 scope.go:117] "RemoveContainer" containerID="e71077dfac040065c7a025d0bc44a1e1e6159bc301d19d1ce9b0414ad23367ff" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.848400 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b65642d-93cc-46c0-bba2-69047027f676" (UID: "4b65642d-93cc-46c0-bba2-69047027f676"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:26 crc kubenswrapper[4675]: I1121 13:36:26.904284 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b65642d-93cc-46c0-bba2-69047027f676-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:27 crc kubenswrapper[4675]: I1121 13:36:27.038907 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-frxvh"] Nov 21 13:36:27 crc kubenswrapper[4675]: I1121 13:36:27.042137 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-frxvh"] Nov 21 13:36:28 crc kubenswrapper[4675]: I1121 13:36:28.858616 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b65642d-93cc-46c0-bba2-69047027f676" path="/var/lib/kubelet/pods/4b65642d-93cc-46c0-bba2-69047027f676/volumes" Nov 21 13:36:30 crc kubenswrapper[4675]: I1121 13:36:30.545139 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.744719 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerID="69b5b8bec82d771010389112c944076ab871275a43c352e2fa0b74a92742c802" exitCode=0 Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.744812 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj6fs" event={"ID":"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c","Type":"ContainerDied","Data":"69b5b8bec82d771010389112c944076ab871275a43c352e2fa0b74a92742c802"} Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.747404 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerStarted","Data":"a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c"} Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.749552 4675 generic.go:334] "Generic (PLEG): container finished" podID="7836be34-1937-4875-849d-f5f7655e7268" containerID="d44c62e27c09a37facc897e779fb2a11067fd4f0431a570adbb033da9c854653" exitCode=0 Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.749595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gppnb" event={"ID":"7836be34-1937-4875-849d-f5f7655e7268","Type":"ContainerDied","Data":"d44c62e27c09a37facc897e779fb2a11067fd4f0431a570adbb033da9c854653"} Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.753137 4675 generic.go:334] "Generic (PLEG): container finished" podID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerID="677707809afa28f49424fcadb3d0868a6d4005248da74c7d9547d5b994336e84" exitCode=0 Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.753200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztth5" event={"ID":"04c7001c-ca6f-43a9-b828-02697c5e581a","Type":"ContainerDied","Data":"677707809afa28f49424fcadb3d0868a6d4005248da74c7d9547d5b994336e84"} Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.755349 4675 generic.go:334] "Generic (PLEG): container finished" podID="606fd1fb-4bb3-434c-a004-07233720375a" containerID="66a917a587875465aa293b992e86f7949c9764d5583a22afcde6398066c0b909" exitCode=0 Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.755385 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lhmq8" event={"ID":"606fd1fb-4bb3-434c-a004-07233720375a","Type":"ContainerDied","Data":"66a917a587875465aa293b992e86f7949c9764d5583a22afcde6398066c0b909"} Nov 21 13:36:31 crc kubenswrapper[4675]: I1121 13:36:31.809329 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-25kf4" podStartSLOduration=5.524666719 podStartE2EDuration="1m18.809312618s" podCreationTimestamp="2025-11-21 13:35:13 +0000 UTC" firstStartedPulling="2025-11-21 13:35:17.3113817 +0000 UTC m=+194.037796427" lastFinishedPulling="2025-11-21 13:36:30.596027599 +0000 UTC m=+267.322442326" observedRunningTime="2025-11-21 13:36:31.805616097 +0000 UTC m=+268.532030824" watchObservedRunningTime="2025-11-21 13:36:31.809312618 +0000 UTC m=+268.535727335" Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.761452 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhmq8" event={"ID":"606fd1fb-4bb3-434c-a004-07233720375a","Type":"ContainerStarted","Data":"3ed1a8a9cf5d05d494b3b7ca4984f31444415277cf7bf2c69e55716b618e5038"} Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.763667 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gppnb" event={"ID":"7836be34-1937-4875-849d-f5f7655e7268","Type":"ContainerStarted","Data":"3e314a4172a85d0182fb8d562dd68a699fa9691ec7abffce91ceb72a57adb71d"} Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.765190 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztth5" event={"ID":"04c7001c-ca6f-43a9-b828-02697c5e581a","Type":"ContainerStarted","Data":"282b033aa281d69b71955d7ec700f520a46cfd76dd6f321b9570f198298337ad"} Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.781126 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lhmq8" podStartSLOduration=2.697273686 podStartE2EDuration="1m20.781056545s" podCreationTimestamp="2025-11-21 13:35:12 +0000 UTC" firstStartedPulling="2025-11-21 13:35:14.195491172 +0000 UTC m=+190.921905899" lastFinishedPulling="2025-11-21 13:36:32.279274031 +0000 UTC m=+269.005688758" observedRunningTime="2025-11-21 13:36:32.778231608 +0000 UTC m=+269.504646355" watchObservedRunningTime="2025-11-21 13:36:32.781056545 +0000 UTC m=+269.507471272" Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.802885 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gppnb" podStartSLOduration=2.544759022 podStartE2EDuration="1m22.802867154s" podCreationTimestamp="2025-11-21 13:35:10 +0000 UTC" firstStartedPulling="2025-11-21 13:35:12.063293145 +0000 UTC m=+188.789707872" lastFinishedPulling="2025-11-21 13:36:32.321401277 +0000 UTC m=+269.047816004" observedRunningTime="2025-11-21 13:36:32.794237637 +0000 UTC m=+269.520652364" watchObservedRunningTime="2025-11-21 13:36:32.802867154 +0000 UTC m=+269.529281881" Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.819158 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ztth5" podStartSLOduration=1.6566882 podStartE2EDuration="1m20.819141071s" podCreationTimestamp="2025-11-21 13:35:12 +0000 UTC" firstStartedPulling="2025-11-21 13:35:13.093401827 +0000 UTC m=+189.819816554" lastFinishedPulling="2025-11-21 13:36:32.255854698 +0000 UTC m=+268.982269425" 
observedRunningTime="2025-11-21 13:36:32.816954451 +0000 UTC m=+269.543369178" watchObservedRunningTime="2025-11-21 13:36:32.819141071 +0000 UTC m=+269.545555798" Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.902385 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:36:32 crc kubenswrapper[4675]: I1121 13:36:32.902700 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:36:33 crc kubenswrapper[4675]: I1121 13:36:33.771368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj6fs" event={"ID":"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c","Type":"ContainerStarted","Data":"6891e862bb596a9f02ee9b8713490b6454f513f645abfdc9e81250f3e694f5f0"} Nov 21 13:36:33 crc kubenswrapper[4675]: I1121 13:36:33.788540 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rj6fs" podStartSLOduration=3.465434621 podStartE2EDuration="1m23.788520744s" podCreationTimestamp="2025-11-21 13:35:10 +0000 UTC" firstStartedPulling="2025-11-21 13:35:12.074256659 +0000 UTC m=+188.800671386" lastFinishedPulling="2025-11-21 13:36:32.397342782 +0000 UTC m=+269.123757509" observedRunningTime="2025-11-21 13:36:33.787451455 +0000 UTC m=+270.513866182" watchObservedRunningTime="2025-11-21 13:36:33.788520744 +0000 UTC m=+270.514935471" Nov 21 13:36:33 crc kubenswrapper[4675]: I1121 13:36:33.945304 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lhmq8" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="registry-server" probeResult="failure" output=< Nov 21 13:36:33 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 13:36:33 crc kubenswrapper[4675]: > Nov 21 13:36:34 crc kubenswrapper[4675]: I1121 13:36:34.106409 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:36:34 crc kubenswrapper[4675]: I1121 13:36:34.106451 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:36:35 crc kubenswrapper[4675]: I1121 13:36:35.142855 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-25kf4" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="registry-server" probeResult="failure" output=< Nov 21 13:36:35 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 13:36:35 crc kubenswrapper[4675]: > Nov 21 13:36:40 crc kubenswrapper[4675]: I1121 13:36:40.789370 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:36:40 crc kubenswrapper[4675]: I1121 13:36:40.789750 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:36:40 crc kubenswrapper[4675]: I1121 13:36:40.825581 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:36:40 crc kubenswrapper[4675]: I1121 13:36:40.864104 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:36:41 crc kubenswrapper[4675]: I1121 13:36:41.132630 4675 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:36:41 crc kubenswrapper[4675]: I1121 13:36:41.132676 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:36:41 crc kubenswrapper[4675]: I1121 13:36:41.192679 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:36:41 crc kubenswrapper[4675]: I1121 13:36:41.855059 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:36:42 crc kubenswrapper[4675]: I1121 13:36:42.499267 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:36:42 crc kubenswrapper[4675]: I1121 13:36:42.499322 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:36:42 crc kubenswrapper[4675]: I1121 13:36:42.545370 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:36:42 crc kubenswrapper[4675]: I1121 13:36:42.864045 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:36:42 crc kubenswrapper[4675]: I1121 13:36:42.951692 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:36:42 crc kubenswrapper[4675]: I1121 13:36:42.989649 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:36:43 crc kubenswrapper[4675]: I1121 13:36:43.035698 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gppnb"] Nov 21 13:36:43 crc kubenswrapper[4675]: I1121 13:36:43.144751 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tz4xg"] Nov 21 13:36:43 crc kubenswrapper[4675]: I1121 13:36:43.826897 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gppnb" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="registry-server" containerID="cri-o://3e314a4172a85d0182fb8d562dd68a699fa9691ec7abffce91ceb72a57adb71d" gracePeriod=2 Nov 21 13:36:44 crc kubenswrapper[4675]: I1121 13:36:44.152322 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:36:44 crc kubenswrapper[4675]: I1121 13:36:44.205039 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:36:44 crc kubenswrapper[4675]: I1121 13:36:44.837720 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhmq8"] Nov 21 13:36:44 crc kubenswrapper[4675]: I1121 13:36:44.839082 4675 generic.go:334] "Generic (PLEG): container finished" podID="7836be34-1937-4875-849d-f5f7655e7268" containerID="3e314a4172a85d0182fb8d562dd68a699fa9691ec7abffce91ceb72a57adb71d" exitCode=0 Nov 21 13:36:44 crc kubenswrapper[4675]: I1121 13:36:44.839983 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gppnb" 
event={"ID":"7836be34-1937-4875-849d-f5f7655e7268","Type":"ContainerDied","Data":"3e314a4172a85d0182fb8d562dd68a699fa9691ec7abffce91ceb72a57adb71d"} Nov 21 13:36:44 crc kubenswrapper[4675]: I1121 13:36:44.840145 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lhmq8" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="registry-server" containerID="cri-o://3ed1a8a9cf5d05d494b3b7ca4984f31444415277cf7bf2c69e55716b618e5038" gracePeriod=2 Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.653839 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.674605 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgjv8\" (UniqueName: \"kubernetes.io/projected/7836be34-1937-4875-849d-f5f7655e7268-kube-api-access-cgjv8\") pod \"7836be34-1937-4875-849d-f5f7655e7268\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.674692 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-catalog-content\") pod \"7836be34-1937-4875-849d-f5f7655e7268\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.675220 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-utilities\") pod \"7836be34-1937-4875-849d-f5f7655e7268\" (UID: \"7836be34-1937-4875-849d-f5f7655e7268\") " Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.676537 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-utilities" (OuterVolumeSpecName: "utilities") pod "7836be34-1937-4875-849d-f5f7655e7268" (UID: "7836be34-1937-4875-849d-f5f7655e7268"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.681217 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7836be34-1937-4875-849d-f5f7655e7268-kube-api-access-cgjv8" (OuterVolumeSpecName: "kube-api-access-cgjv8") pod "7836be34-1937-4875-849d-f5f7655e7268" (UID: "7836be34-1937-4875-849d-f5f7655e7268"). InnerVolumeSpecName "kube-api-access-cgjv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.681325 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.681360 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgjv8\" (UniqueName: \"kubernetes.io/projected/7836be34-1937-4875-849d-f5f7655e7268-kube-api-access-cgjv8\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.724832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7836be34-1937-4875-849d-f5f7655e7268" (UID: "7836be34-1937-4875-849d-f5f7655e7268"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.782140 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7836be34-1937-4875-849d-f5f7655e7268-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.846636 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gppnb" event={"ID":"7836be34-1937-4875-849d-f5f7655e7268","Type":"ContainerDied","Data":"eeb892405e466f34278fa5d1a29df943575c73ae21a3fd6ae04b1ed5085877a9"} Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.846689 4675 scope.go:117] "RemoveContainer" containerID="3e314a4172a85d0182fb8d562dd68a699fa9691ec7abffce91ceb72a57adb71d" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.846819 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gppnb" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.855992 4675 generic.go:334] "Generic (PLEG): container finished" podID="606fd1fb-4bb3-434c-a004-07233720375a" containerID="3ed1a8a9cf5d05d494b3b7ca4984f31444415277cf7bf2c69e55716b618e5038" exitCode=0 Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.856028 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhmq8" event={"ID":"606fd1fb-4bb3-434c-a004-07233720375a","Type":"ContainerDied","Data":"3ed1a8a9cf5d05d494b3b7ca4984f31444415277cf7bf2c69e55716b618e5038"} Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.876517 4675 scope.go:117] "RemoveContainer" containerID="d44c62e27c09a37facc897e779fb2a11067fd4f0431a570adbb033da9c854653" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.876600 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.883187 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6d2x\" (UniqueName: \"kubernetes.io/projected/606fd1fb-4bb3-434c-a004-07233720375a-kube-api-access-n6d2x\") pod \"606fd1fb-4bb3-434c-a004-07233720375a\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.884125 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-utilities\") pod \"606fd1fb-4bb3-434c-a004-07233720375a\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.884173 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-catalog-content\") pod \"606fd1fb-4bb3-434c-a004-07233720375a\" (UID: \"606fd1fb-4bb3-434c-a004-07233720375a\") " Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.884491 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-utilities" (OuterVolumeSpecName: "utilities") pod "606fd1fb-4bb3-434c-a004-07233720375a" (UID: "606fd1fb-4bb3-434c-a004-07233720375a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.886385 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606fd1fb-4bb3-434c-a004-07233720375a-kube-api-access-n6d2x" (OuterVolumeSpecName: "kube-api-access-n6d2x") pod "606fd1fb-4bb3-434c-a004-07233720375a" (UID: "606fd1fb-4bb3-434c-a004-07233720375a"). InnerVolumeSpecName "kube-api-access-n6d2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.893801 4675 scope.go:117] "RemoveContainer" containerID="91f0ecea10cad49c1e660636154d2ddc32c5a4b597edd330d380f70b1545f65a" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.911606 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gppnb"] Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.915372 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gppnb"] Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.917311 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "606fd1fb-4bb3-434c-a004-07233720375a" (UID: "606fd1fb-4bb3-434c-a004-07233720375a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.985587 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6d2x\" (UniqueName: \"kubernetes.io/projected/606fd1fb-4bb3-434c-a004-07233720375a-kube-api-access-n6d2x\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.985856 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:45 crc kubenswrapper[4675]: I1121 13:36:45.985918 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606fd1fb-4bb3-434c-a004-07233720375a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.855749 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7836be34-1937-4875-849d-f5f7655e7268" path="/var/lib/kubelet/pods/7836be34-1937-4875-849d-f5f7655e7268/volumes" Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.863961 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhmq8" event={"ID":"606fd1fb-4bb3-434c-a004-07233720375a","Type":"ContainerDied","Data":"975286d8b9bbbfb7ee3aac0edb86092e50ce454697a6e3097899ddd4a9eaa87a"} Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.864016 4675 scope.go:117] "RemoveContainer" containerID="3ed1a8a9cf5d05d494b3b7ca4984f31444415277cf7bf2c69e55716b618e5038" Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.864052 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhmq8" Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.885623 4675 scope.go:117] "RemoveContainer" containerID="66a917a587875465aa293b992e86f7949c9764d5583a22afcde6398066c0b909" Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.887869 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhmq8"] Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.892656 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhmq8"] Nov 21 13:36:46 crc kubenswrapper[4675]: I1121 13:36:46.901059 4675 scope.go:117] "RemoveContainer" containerID="15a8a3082a296bf218ab6d19a305d95ffd196911cdda02c9f27889e8ef0ab269" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.235515 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25kf4"] Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.235811 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-25kf4" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="registry-server" containerID="cri-o://a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c" gracePeriod=2 Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.562971 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.701632 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9nzg\" (UniqueName: \"kubernetes.io/projected/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-kube-api-access-h9nzg\") pod \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.701726 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-utilities\") pod \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.701837 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-catalog-content\") pod \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\" (UID: \"864e7914-4c9e-4b74-aa0c-c363b3b9a01f\") " Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.702565 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-utilities" (OuterVolumeSpecName: "utilities") pod "864e7914-4c9e-4b74-aa0c-c363b3b9a01f" (UID: "864e7914-4c9e-4b74-aa0c-c363b3b9a01f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.706564 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-kube-api-access-h9nzg" (OuterVolumeSpecName: "kube-api-access-h9nzg") pod "864e7914-4c9e-4b74-aa0c-c363b3b9a01f" (UID: "864e7914-4c9e-4b74-aa0c-c363b3b9a01f"). InnerVolumeSpecName "kube-api-access-h9nzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.786597 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "864e7914-4c9e-4b74-aa0c-c363b3b9a01f" (UID: "864e7914-4c9e-4b74-aa0c-c363b3b9a01f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.802825 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9nzg\" (UniqueName: \"kubernetes.io/projected/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-kube-api-access-h9nzg\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.802860 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.802870 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864e7914-4c9e-4b74-aa0c-c363b3b9a01f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.870861 4675 generic.go:334] "Generic (PLEG): container finished" podID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerID="a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c" exitCode=0 Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.870909 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25kf4" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.870920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerDied","Data":"a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c"} Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.870990 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25kf4" event={"ID":"864e7914-4c9e-4b74-aa0c-c363b3b9a01f","Type":"ContainerDied","Data":"9ee3e3994aab037beeb9a211fef96f3f0b0c6b80d2232af7c221075d876cceff"} Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.871010 4675 scope.go:117] "RemoveContainer" containerID="a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.884862 4675 scope.go:117] "RemoveContainer" containerID="643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.897942 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25kf4"] Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.906105 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-25kf4"] Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.914893 4675 scope.go:117] "RemoveContainer" containerID="2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.927824 4675 scope.go:117] "RemoveContainer" containerID="a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c" Nov 21 13:36:47 crc kubenswrapper[4675]: E1121 13:36:47.928306 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c\": container with ID starting with a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c not found: ID does not exist" containerID="a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.928339 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c"} err="failed to get container status \"a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c\": rpc error: code = NotFound desc = could not find container \"a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c\": container with ID starting with a5f34386a52de1623853534b83ac3fbd044fbcd410c3f90fb1a7403928b10c9c not found: ID does not exist" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.928365 4675 scope.go:117] "RemoveContainer" containerID="643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438" Nov 21 13:36:47 crc kubenswrapper[4675]: E1121 13:36:47.928619 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438\": container with ID starting with 643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438 not found: ID does not exist" containerID="643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.928657 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438"} err="failed to get container status \"643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438\": rpc error: code = NotFound desc = could not find container \"643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438\": container with ID starting with 643f0037aabd3a3b1a307cbd92c77ba91336c1d1eb26c7ed94f956d7a1211438 not found: ID does not exist" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.928690 4675 scope.go:117] "RemoveContainer" containerID="2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd" Nov 21 13:36:47 crc kubenswrapper[4675]: E1121 13:36:47.928956 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd\": container with ID starting with 2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd not found: ID does not exist" containerID="2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd" Nov 21 13:36:47 crc kubenswrapper[4675]: I1121 13:36:47.928982 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd"} err="failed to get container status \"2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd\": rpc error: code = NotFound desc = could not find container \"2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd\": container with ID starting with 2c49ec50d79464c86e2e03bc203785a31d7621daa0c2455afd3e89c11d32f9fd not found: ID does not exist" Nov 21 13:36:47 crc kubenswrapper[4675]: E1121 13:36:47.949439 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864e7914_4c9e_4b74_aa0c_c363b3b9a01f.slice/crio-9ee3e3994aab037beeb9a211fef96f3f0b0c6b80d2232af7c221075d876cceff\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864e7914_4c9e_4b74_aa0c_c363b3b9a01f.slice\": RecentStats: unable to find data in 
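The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries at 13:36:47.928 are a benign race: the kubelet retries RemoveContainer for IDs that CRI-O has already purged, the runtime answers rpc NotFound, and cleanup proceeds because the end state (container gone) already holds. A sketch of that idempotent-delete pattern, with a plain map standing in for the runtime:

    package main

    import (
        "errors"
        "fmt"
    )

    // Stand-in for the gRPC NotFound status returned by the runtime above.
    var errNotFound = errors.New("rpc error: code = NotFound")

    // removeContainer models the idempotent delete the log implies: NotFound
    // means the container is already gone, so the caller logs the error and
    // moves on instead of failing the cleanup.
    func removeContainer(runtime map[string]bool, id string) error {
        if !runtime[id] {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        delete(runtime, id)
        return nil
    }

    func main() {
        runtime := map[string]bool{} // already removed by an earlier pass
        if err := removeContainer(runtime, "a5f34386a52d"); errors.Is(err, errNotFound) {
            fmt.Println("DeleteContainer returned error (already gone, safe to ignore):", err)
        }
    }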
memory cache]" Nov 21 13:36:48 crc kubenswrapper[4675]: I1121 13:36:48.856180 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606fd1fb-4bb3-434c-a004-07233720375a" path="/var/lib/kubelet/pods/606fd1fb-4bb3-434c-a004-07233720375a/volumes" Nov 21 13:36:48 crc kubenswrapper[4675]: I1121 13:36:48.857181 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" path="/var/lib/kubelet/pods/864e7914-4c9e-4b74-aa0c-c363b3b9a01f/volumes" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.952300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.952354 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.952405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.952438 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.955210 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.955287 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.956804 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.964530 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.964753 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.967768 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.976823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.976912 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.980569 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:36:51 crc kubenswrapper[4675]: I1121 13:36:51.986055 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:36:52 crc kubenswrapper[4675]: I1121 13:36:52.271118 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:36:52 crc kubenswrapper[4675]: W1121 13:36:52.382620 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-375f368b93a324032a71a53ee76513fda7ebb99de14a1c8f688e68d7806fbbcb WatchSource:0}: Error finding container 375f368b93a324032a71a53ee76513fda7ebb99de14a1c8f688e68d7806fbbcb: Status 404 returned error can't find the container with id 375f368b93a324032a71a53ee76513fda7ebb99de14a1c8f688e68d7806fbbcb Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:52.897806 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6b43aa5e66988083db0ce08ec7f1d64f3f8b27338e06dfe6b28705c1f0360ca9"} Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:52.897847 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"60f2a24ce1421dae952e562853125a44000c5bce3db87a362615c9b45afe7423"} Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:52.899084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"375f368b93a324032a71a53ee76513fda7ebb99de14a1c8f688e68d7806fbbcb"} Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:52.900027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
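Around 13:36:52 the kubelet starts fresh sandboxes for the network-diagnostics and networking-console-plugin pods ("No sandbox for pod can be found. Need to start a new one"), and the PLEG relist then surfaces the resulting ContainerStarted events, followed at 13:36:53 by a readiness probe transition; the cadvisor "Failed to process watch event ... 404" warning appears to be the usual race with a just-created cgroup. A toy relist that diffs two runtime snapshots into such events; the types are simplified and the IDs are truncated from the log for readability:

    package main

    import "fmt"

    // Toy pod lifecycle event generator (PLEG): diff container states between
    // two relists and emit the events that feed "SyncLoop (PLEG): event for pod".
    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    func relist(old, cur map[string]state) {
        for id, s := range cur {
            prev, seen := old[id]
            switch {
            case !seen && s == running:
                fmt.Printf("PLEG event: ContainerStarted %s\n", id)
            case seen && prev == running && s == exited:
                fmt.Printf("PLEG event: ContainerDied %s\n", id)
            }
        }
    }

    func main() {
        relist(
            map[string]state{"3ed1a8a9cf5d": running},                         // previous relist
            map[string]state{"3ed1a8a9cf5d": exited, "077e1160b87e": running}, // current relist
        )
    }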
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"63fc15f465e37d90ee09439fd426ed0fffecfc09f31a4a48c35b63807f6c141b"} Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:53.907252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"077e1160b87ed4252bc3b86b18973a86a7f41fb7fd3b4baa53dff79d82988ea4"} Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:53.907459 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:36:53 crc kubenswrapper[4675]: I1121 13:36:53.910218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"75ea113f701f466d6f7cf7779a1987e6b8a45570e15da7d17aa4f157bdcfea30"} Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.174120 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" podUID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" containerName="oauth-openshift" containerID="cri-o://f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5" gracePeriod=15 Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.506532 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533248 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6cb668d466-6k52j"] Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533508 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533523 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533534 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533542 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533551 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533561 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533570 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533577 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533590 4675 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533597 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533607 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533614 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533623 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" containerName="oauth-openshift" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533630 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" containerName="oauth-openshift" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533640 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9748fc-dd1d-4029-b8ec-706d77ff7d0f" containerName="pruner" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533648 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9748fc-dd1d-4029-b8ec-706d77ff7d0f" containerName="pruner" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533656 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533663 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533675 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533682 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533693 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533700 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="extract-content" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533711 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533719 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533733 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533739 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="extract-utilities" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533749 4675 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533756 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: E1121 13:37:08.533768 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d7ca7b-c5d0-430e-a66a-bca384879ef5" containerName="pruner" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533775 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d7ca7b-c5d0-430e-a66a-bca384879ef5" containerName="pruner" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533883 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7836be34-1937-4875-849d-f5f7655e7268" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533896 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d7ca7b-c5d0-430e-a66a-bca384879ef5" containerName="pruner" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533904 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="864e7914-4c9e-4b74-aa0c-c363b3b9a01f" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533916 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" containerName="oauth-openshift" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533927 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9748fc-dd1d-4029-b8ec-706d77ff7d0f" containerName="pruner" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533936 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="606fd1fb-4bb3-434c-a004-07233720375a" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.533949 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b65642d-93cc-46c0-bba2-69047027f676" containerName="registry-server" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.534405 4675 util.go:30] "No sandbox for pod can be found. 
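The long run of paired cpu_manager/state_mem and memory_manager entries at 13:37:08.533 is housekeeping performed while admitting oauth-openshift-6cb668d466-6k52j: per-container CPU and memory assignments left over from pods the kubelet no longer tracks (the catalog pods, the pruners, the old oauth pod) are purged, and despite the E prefix on "RemoveStaleState: removing container" no failure is involved. A hypothetical simplification of that pruning; the real managers keep richer state than this map:

    package main

    import "fmt"

    type assignment struct{ podUID, container string }

    // removeStaleState mirrors the paired cpu_manager/state_mem entries above:
    // every CPUSet assignment whose pod the kubelet no longer tracks is
    // dropped before the next pod is admitted.
    func removeStaleState(assignments map[assignment]string, livePods map[string]bool) {
        for a := range assignments {
            if !livePods[a.podUID] {
                fmt.Printf("RemoveStaleState: removing container pod=%s name=%s\n", a.podUID, a.container)
                delete(assignments, a)
                fmt.Printf("Deleted CPUSet assignment pod=%s name=%s\n", a.podUID, a.container)
            }
        }
    }

    func main() {
        assignments := map[assignment]string{
            {"7836be34-1937-4875-849d-f5f7655e7268", "registry-server"}: "0-3",
            {"e39d58d9-6706-43f1-855b-73d6fc666e3c", "oauth-openshift"}: "0-3",
        }
        removeStaleState(assignments, map[string]bool{
            "e39d58d9-6706-43f1-855b-73d6fc666e3c": true, // only the new pod is live
        })
    }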
Need to start a new one" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.550261 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cb668d466-6k52j"] Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650468 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-policies\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650515 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-serving-cert\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650554 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-provider-selection\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650581 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-router-certs\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650609 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-session\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650628 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-idp-0-file-data\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650643 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-login\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650659 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-service-ca\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-ocp-branding-template\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650717 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-cliconfig\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-error\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650767 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv89g\" (UniqueName: \"kubernetes.io/projected/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-kube-api-access-sv89g\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650788 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-trusted-ca-bundle\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-dir\") pod \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\" (UID: \"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb\") " Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39d58d9-6706-43f1-855b-73d6fc666e3c-audit-dir\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650938 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-login\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.650967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651003 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-audit-policies\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651054 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-error\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651106 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651131 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651156 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651173 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " 
pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651206 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651254 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-session\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.651296 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.653020 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7v84\" (UniqueName: \"kubernetes.io/projected/e39d58d9-6706-43f1-855b-73d6fc666e3c-kube-api-access-x7v84\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.653103 4675 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.653133 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.653173 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.653277 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.653406 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.657033 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.657260 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.658304 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.658546 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.658784 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.659006 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.659356 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.659458 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-kube-api-access-sv89g" (OuterVolumeSpecName: "kube-api-access-sv89g") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "kube-api-access-sv89g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.659688 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" (UID: "8acbfba2-82d1-4e2a-bd77-7f35f84d35eb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-audit-policies\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754218 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754238 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754255 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754279 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754301 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754335 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754352 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-session\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7v84\" (UniqueName: \"kubernetes.io/projected/e39d58d9-6706-43f1-855b-73d6fc666e3c-kube-api-access-x7v84\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39d58d9-6706-43f1-855b-73d6fc666e3c-audit-dir\") pod 
\"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-login\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754446 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754461 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754482 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754497 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754510 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754521 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754531 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754541 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754550 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754559 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754569 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv89g\" (UniqueName: \"kubernetes.io/projected/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-kube-api-access-sv89g\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754583 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.754597 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.755163 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-audit-policies\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.755229 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39d58d9-6706-43f1-855b-73d6fc666e3c-audit-dir\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.755249 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.755821 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.755905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.757925 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-error\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 
crc kubenswrapper[4675]: I1121 13:37:08.758055 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.758197 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.758591 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.758694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.759121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-user-template-login\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.759484 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-session\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.760162 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e39d58d9-6706-43f1-855b-73d6fc666e3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.770366 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7v84\" (UniqueName: \"kubernetes.io/projected/e39d58d9-6706-43f1-855b-73d6fc666e3c-kube-api-access-x7v84\") pod \"oauth-openshift-6cb668d466-6k52j\" (UID: \"e39d58d9-6706-43f1-855b-73d6fc666e3c\") " pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.852145 4675 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.982141 4675 generic.go:334] "Generic (PLEG): container finished" podID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" containerID="f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5" exitCode=0 Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.982236 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.982258 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" event={"ID":"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb","Type":"ContainerDied","Data":"f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5"} Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.982553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tz4xg" event={"ID":"8acbfba2-82d1-4e2a-bd77-7f35f84d35eb","Type":"ContainerDied","Data":"0f02772ed2f2e60a8ba8b1f22d376c1927d836a120fa52026fe0547f2e9ab315"} Nov 21 13:37:08 crc kubenswrapper[4675]: I1121 13:37:08.982576 4675 scope.go:117] "RemoveContainer" containerID="f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5" Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.004636 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tz4xg"] Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.005257 4675 scope.go:117] "RemoveContainer" containerID="f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5" Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.006133 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tz4xg"] Nov 21 13:37:09 crc kubenswrapper[4675]: E1121 13:37:09.006480 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5\": container with ID starting with f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5 not found: ID does not exist" containerID="f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5" Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.006513 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5"} err="failed to get container status \"f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5\": rpc error: code = NotFound desc = could not find container \"f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5\": container with ID starting with f4cff351d8b77f03492097c5c4d87fd4b4f4c8132d125028951840c4a3170ef5 not found: ID does not exist" Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.031889 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cb668d466-6k52j"] Nov 21 13:37:09 crc kubenswrapper[4675]: W1121 13:37:09.041866 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode39d58d9_6706_43f1_855b_73d6fc666e3c.slice/crio-039c59e2222eb42dcac751cd9eccdf960445e2cc862e0a9e01cd9a3fbe8817a9 WatchSource:0}: Error 
finding container 039c59e2222eb42dcac751cd9eccdf960445e2cc862e0a9e01cd9a3fbe8817a9: Status 404 returned error can't find the container with id 039c59e2222eb42dcac751cd9eccdf960445e2cc862e0a9e01cd9a3fbe8817a9 Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.989755 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" event={"ID":"e39d58d9-6706-43f1-855b-73d6fc666e3c","Type":"ContainerStarted","Data":"f78c45ae99e638c91b7abb594f628798168235a1265fd6b14877ffec91da0dee"} Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.990929 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:09 crc kubenswrapper[4675]: I1121 13:37:09.990952 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" event={"ID":"e39d58d9-6706-43f1-855b-73d6fc666e3c","Type":"ContainerStarted","Data":"039c59e2222eb42dcac751cd9eccdf960445e2cc862e0a9e01cd9a3fbe8817a9"} Nov 21 13:37:10 crc kubenswrapper[4675]: I1121 13:37:10.020775 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" podStartSLOduration=27.02075884 podStartE2EDuration="27.02075884s" podCreationTimestamp="2025-11-21 13:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:37:10.019715452 +0000 UTC m=+306.746130189" watchObservedRunningTime="2025-11-21 13:37:10.02075884 +0000 UTC m=+306.747173567" Nov 21 13:37:10 crc kubenswrapper[4675]: I1121 13:37:10.309930 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6cb668d466-6k52j" Nov 21 13:37:10 crc kubenswrapper[4675]: I1121 13:37:10.856106 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8acbfba2-82d1-4e2a-bd77-7f35f84d35eb" path="/var/lib/kubelet/pods/8acbfba2-82d1-4e2a-bd77-7f35f84d35eb/volumes" Nov 21 13:37:32 crc kubenswrapper[4675]: I1121 13:37:32.288853 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:38:16 crc kubenswrapper[4675]: I1121 13:38:16.136830 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:38:16 crc kubenswrapper[4675]: I1121 13:38:16.137466 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.151855 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjgjf"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.153568 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fjgjf" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="registry-server" 
containerID="cri-o://416035169cbc201d0c7a7193957000cf7ea61e2108093ad3fb246479d1bc69ab" gracePeriod=30 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.170265 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj6fs"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.170847 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rj6fs" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="registry-server" containerID="cri-o://6891e862bb596a9f02ee9b8713490b6454f513f645abfdc9e81250f3e694f5f0" gracePeriod=30 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.183369 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5svl"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.183580 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" containerID="cri-o://9573c1d363c480b9b13406b9b975bfc8522f3c8089f7e4612ef0dd36d80a5e20" gracePeriod=30 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.184951 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztth5"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.185208 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ztth5" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="registry-server" containerID="cri-o://282b033aa281d69b71955d7ec700f520a46cfd76dd6f321b9570f198298337ad" gracePeriod=30 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.191419 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjkqd"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.191686 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjkqd" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="registry-server" containerID="cri-o://a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5" gracePeriod=30 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.195597 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6q9sj"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.199909 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.215045 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6q9sj"] Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.320270 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.320454 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.320511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56xxt\" (UniqueName: \"kubernetes.io/projected/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-kube-api-access-56xxt\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: E1121 13:38:39.346622 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69663c02_7b8c_478d_8975_79fae4dbadea.slice/crio-a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69663c02_7b8c_478d_8975_79fae4dbadea.slice/crio-conmon-a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.421938 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.421984 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.422038 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56xxt\" (UniqueName: \"kubernetes.io/projected/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-kube-api-access-56xxt\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.423389 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.434791 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.441604 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56xxt\" (UniqueName: \"kubernetes.io/projected/6a21daba-95a0-4f20-91b5-de4dc44aa0b1-kube-api-access-56xxt\") pod \"marketplace-operator-79b997595-6q9sj\" (UID: \"6a21daba-95a0-4f20-91b5-de4dc44aa0b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.484982 4675 generic.go:334] "Generic (PLEG): container finished" podID="69663c02-7b8c-478d-8975-79fae4dbadea" containerID="a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5" exitCode=0 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.485060 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerDied","Data":"a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5"} Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.487128 4675 generic.go:334] "Generic (PLEG): container finished" podID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerID="282b033aa281d69b71955d7ec700f520a46cfd76dd6f321b9570f198298337ad" exitCode=0 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.487165 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztth5" event={"ID":"04c7001c-ca6f-43a9-b828-02697c5e581a","Type":"ContainerDied","Data":"282b033aa281d69b71955d7ec700f520a46cfd76dd6f321b9570f198298337ad"} Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.492154 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerID="6891e862bb596a9f02ee9b8713490b6454f513f645abfdc9e81250f3e694f5f0" exitCode=0 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.492226 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj6fs" event={"ID":"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c","Type":"ContainerDied","Data":"6891e862bb596a9f02ee9b8713490b6454f513f645abfdc9e81250f3e694f5f0"} Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.494471 4675 generic.go:334] "Generic (PLEG): container finished" podID="39e32a09-8172-443c-bd56-00a536a06de2" containerID="9573c1d363c480b9b13406b9b975bfc8522f3c8089f7e4612ef0dd36d80a5e20" exitCode=0 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.494503 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" 
event={"ID":"39e32a09-8172-443c-bd56-00a536a06de2","Type":"ContainerDied","Data":"9573c1d363c480b9b13406b9b975bfc8522f3c8089f7e4612ef0dd36d80a5e20"} Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.495900 4675 generic.go:334] "Generic (PLEG): container finished" podID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerID="416035169cbc201d0c7a7193957000cf7ea61e2108093ad3fb246479d1bc69ab" exitCode=0 Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.495922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjgjf" event={"ID":"38f5849e-24d7-4fb4-8c2b-14c748f61f03","Type":"ContainerDied","Data":"416035169cbc201d0c7a7193957000cf7ea61e2108093ad3fb246479d1bc69ab"} Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.521423 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.581270 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.624091 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99ttk\" (UniqueName: \"kubernetes.io/projected/38f5849e-24d7-4fb4-8c2b-14c748f61f03-kube-api-access-99ttk\") pod \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.624177 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-utilities\") pod \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.624219 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-catalog-content\") pod \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\" (UID: \"38f5849e-24d7-4fb4-8c2b-14c748f61f03\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.625424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-utilities" (OuterVolumeSpecName: "utilities") pod "38f5849e-24d7-4fb4-8c2b-14c748f61f03" (UID: "38f5849e-24d7-4fb4-8c2b-14c748f61f03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.629588 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f5849e-24d7-4fb4-8c2b-14c748f61f03-kube-api-access-99ttk" (OuterVolumeSpecName: "kube-api-access-99ttk") pod "38f5849e-24d7-4fb4-8c2b-14c748f61f03" (UID: "38f5849e-24d7-4fb4-8c2b-14c748f61f03"). InnerVolumeSpecName "kube-api-access-99ttk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.682950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38f5849e-24d7-4fb4-8c2b-14c748f61f03" (UID: "38f5849e-24d7-4fb4-8c2b-14c748f61f03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.707372 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.714803 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.719550 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.727511 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99ttk\" (UniqueName: \"kubernetes.io/projected/38f5849e-24d7-4fb4-8c2b-14c748f61f03-kube-api-access-99ttk\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.727546 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.727585 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f5849e-24d7-4fb4-8c2b-14c748f61f03-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.732845 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.828810 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9mfv\" (UniqueName: \"kubernetes.io/projected/39e32a09-8172-443c-bd56-00a536a06de2-kube-api-access-v9mfv\") pod \"39e32a09-8172-443c-bd56-00a536a06de2\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829105 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-utilities\") pod \"04c7001c-ca6f-43a9-b828-02697c5e581a\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-catalog-content\") pod \"69663c02-7b8c-478d-8975-79fae4dbadea\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829398 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-trusted-ca\") pod \"39e32a09-8172-443c-bd56-00a536a06de2\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-utilities\") pod \"69663c02-7b8c-478d-8975-79fae4dbadea\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829691 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-catalog-content\") pod \"04c7001c-ca6f-43a9-b828-02697c5e581a\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829791 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-utilities" (OuterVolumeSpecName: "utilities") pod "04c7001c-ca6f-43a9-b828-02697c5e581a" (UID: "04c7001c-ca6f-43a9-b828-02697c5e581a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "39e32a09-8172-443c-bd56-00a536a06de2" (UID: "39e32a09-8172-443c-bd56-00a536a06de2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.829971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-catalog-content\") pod \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.830156 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-operator-metrics\") pod \"39e32a09-8172-443c-bd56-00a536a06de2\" (UID: \"39e32a09-8172-443c-bd56-00a536a06de2\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.830259 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh8rt\" (UniqueName: \"kubernetes.io/projected/04c7001c-ca6f-43a9-b828-02697c5e581a-kube-api-access-zh8rt\") pod \"04c7001c-ca6f-43a9-b828-02697c5e581a\" (UID: \"04c7001c-ca6f-43a9-b828-02697c5e581a\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.830377 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpvgv\" (UniqueName: \"kubernetes.io/projected/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-kube-api-access-zpvgv\") pod \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.830506 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpbnn\" (UniqueName: \"kubernetes.io/projected/69663c02-7b8c-478d-8975-79fae4dbadea-kube-api-access-zpbnn\") pod \"69663c02-7b8c-478d-8975-79fae4dbadea\" (UID: \"69663c02-7b8c-478d-8975-79fae4dbadea\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.830532 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-utilities" (OuterVolumeSpecName: "utilities") pod "69663c02-7b8c-478d-8975-79fae4dbadea" (UID: "69663c02-7b8c-478d-8975-79fae4dbadea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.831392 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.831546 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.831647 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.833247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e32a09-8172-443c-bd56-00a536a06de2-kube-api-access-v9mfv" (OuterVolumeSpecName: "kube-api-access-v9mfv") pod "39e32a09-8172-443c-bd56-00a536a06de2" (UID: "39e32a09-8172-443c-bd56-00a536a06de2"). InnerVolumeSpecName "kube-api-access-v9mfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.833356 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c7001c-ca6f-43a9-b828-02697c5e581a-kube-api-access-zh8rt" (OuterVolumeSpecName: "kube-api-access-zh8rt") pod "04c7001c-ca6f-43a9-b828-02697c5e581a" (UID: "04c7001c-ca6f-43a9-b828-02697c5e581a"). InnerVolumeSpecName "kube-api-access-zh8rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.833708 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "39e32a09-8172-443c-bd56-00a536a06de2" (UID: "39e32a09-8172-443c-bd56-00a536a06de2"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.833869 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69663c02-7b8c-478d-8975-79fae4dbadea-kube-api-access-zpbnn" (OuterVolumeSpecName: "kube-api-access-zpbnn") pod "69663c02-7b8c-478d-8975-79fae4dbadea" (UID: "69663c02-7b8c-478d-8975-79fae4dbadea"). InnerVolumeSpecName "kube-api-access-zpbnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.834751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-kube-api-access-zpvgv" (OuterVolumeSpecName: "kube-api-access-zpvgv") pod "8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" (UID: "8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c"). InnerVolumeSpecName "kube-api-access-zpvgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.851924 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04c7001c-ca6f-43a9-b828-02697c5e581a" (UID: "04c7001c-ca6f-43a9-b828-02697c5e581a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.886438 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" (UID: "8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.924147 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69663c02-7b8c-478d-8975-79fae4dbadea" (UID: "69663c02-7b8c-478d-8975-79fae4dbadea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.932409 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-utilities\") pod \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\" (UID: \"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c\") " Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.932555 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpvgv\" (UniqueName: \"kubernetes.io/projected/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-kube-api-access-zpvgv\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.932568 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpbnn\" (UniqueName: \"kubernetes.io/projected/69663c02-7b8c-478d-8975-79fae4dbadea-kube-api-access-zpbnn\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.932577 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9mfv\" (UniqueName: \"kubernetes.io/projected/39e32a09-8172-443c-bd56-00a536a06de2-kube-api-access-v9mfv\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.932586 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69663c02-7b8c-478d-8975-79fae4dbadea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.933659 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c7001c-ca6f-43a9-b828-02697c5e581a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.933704 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.933716 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39e32a09-8172-443c-bd56-00a536a06de2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.933726 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh8rt\" (UniqueName: \"kubernetes.io/projected/04c7001c-ca6f-43a9-b828-02697c5e581a-kube-api-access-zh8rt\") on node \"crc\" DevicePath 
\"\"" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.934361 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-utilities" (OuterVolumeSpecName: "utilities") pod "8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" (UID: "8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:38:39 crc kubenswrapper[4675]: I1121 13:38:39.936671 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6q9sj"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.034516 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.501386 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" event={"ID":"39e32a09-8172-443c-bd56-00a536a06de2","Type":"ContainerDied","Data":"1853a90637511afc095925eb357d0e47f8ff6acac91e1090a661e65eaff455d1"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.501470 4675 scope.go:117] "RemoveContainer" containerID="9573c1d363c480b9b13406b9b975bfc8522f3c8089f7e4612ef0dd36d80a5e20" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.501555 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r5svl" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.503395 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj6fs" event={"ID":"8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c","Type":"ContainerDied","Data":"cb2f5288fc60c3d7b1f0a42b912533e8a85722b6c1d269eb696a6ade3e5d179d"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.503422 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj6fs" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.505228 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" event={"ID":"6a21daba-95a0-4f20-91b5-de4dc44aa0b1","Type":"ContainerStarted","Data":"63cbbe3b7ee73a845a4a0ecc3fe022fcba0fa9729b3abede67d337c15fa04a5f"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.505277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" event={"ID":"6a21daba-95a0-4f20-91b5-de4dc44aa0b1","Type":"ContainerStarted","Data":"e1283a2316e61371b540b6fba21a6e99a887e17c7d4865d14ab5593cff4a2a8d"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.505764 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.509726 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjgjf" event={"ID":"38f5849e-24d7-4fb4-8c2b-14c748f61f03","Type":"ContainerDied","Data":"880233d818e7d583c24c155fe67ef7a9c949819dfae39f5faf19d8c1ef468271"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.509839 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjgjf" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.512306 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.514330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjkqd" event={"ID":"69663c02-7b8c-478d-8975-79fae4dbadea","Type":"ContainerDied","Data":"eb142fb7753dda93d2d591b46ba843b932c84d0de9fb81c22db52a0a850369b4"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.514354 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjkqd" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.515355 4675 scope.go:117] "RemoveContainer" containerID="6891e862bb596a9f02ee9b8713490b6454f513f645abfdc9e81250f3e694f5f0" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.517589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztth5" event={"ID":"04c7001c-ca6f-43a9-b828-02697c5e581a","Type":"ContainerDied","Data":"7678d6aef048928c6b243701ba34b18678d40a09d93b71755fdabb902f53f3e8"} Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.517618 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztth5" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.532861 4675 scope.go:117] "RemoveContainer" containerID="69b5b8bec82d771010389112c944076ab871275a43c352e2fa0b74a92742c802" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.532960 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6q9sj" podStartSLOduration=1.53293996 podStartE2EDuration="1.53293996s" podCreationTimestamp="2025-11-21 13:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:38:40.530914415 +0000 UTC m=+397.257329142" watchObservedRunningTime="2025-11-21 13:38:40.53293996 +0000 UTC m=+397.259354687" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.548801 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5svl"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.551127 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5svl"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.566194 4675 scope.go:117] "RemoveContainer" containerID="dc9bca587bf032e94e7edc0e5cdf3b4c4a5ca26fe57e557c19ea47d7f8d4823e" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.569303 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj6fs"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.576199 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rj6fs"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.626896 4675 scope.go:117] "RemoveContainer" containerID="416035169cbc201d0c7a7193957000cf7ea61e2108093ad3fb246479d1bc69ab" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.636939 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztth5"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 
13:38:40.639397 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztth5"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.643110 4675 scope.go:117] "RemoveContainer" containerID="7bf021d5b6c79287d9684e2424ba0619342bbc5bc7ac90a4e6d53d1bd36acd45" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.648320 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjkqd"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.650778 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjkqd"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.660083 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjgjf"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.662298 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fjgjf"] Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.670804 4675 scope.go:117] "RemoveContainer" containerID="54a48833a0023ff9fc3ae91970fb51cee5f8b0e5d04b168c40916aad766cc2fd" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.689900 4675 scope.go:117] "RemoveContainer" containerID="a0cbb60ba6223e1220ea7e4035058e5c622690a46b4289bfbd5ab306e9d81ad5" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.706968 4675 scope.go:117] "RemoveContainer" containerID="31ab96fc4c703590459e3ce01ded75259259fd87079bf7b146248c9bddd1af78" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.719201 4675 scope.go:117] "RemoveContainer" containerID="e565b363e3c31356cc729b3a7983a4aa15b836cec41ff444276253cc328085fa" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.729792 4675 scope.go:117] "RemoveContainer" containerID="282b033aa281d69b71955d7ec700f520a46cfd76dd6f321b9570f198298337ad" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.740953 4675 scope.go:117] "RemoveContainer" containerID="677707809afa28f49424fcadb3d0868a6d4005248da74c7d9547d5b994336e84" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.752672 4675 scope.go:117] "RemoveContainer" containerID="a50049c466181bc9ff925b8d704917ceb11fd73cbd4b0000953506d1bdc1495c" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.855439 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" path="/var/lib/kubelet/pods/04c7001c-ca6f-43a9-b828-02697c5e581a/volumes" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.856366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" path="/var/lib/kubelet/pods/38f5849e-24d7-4fb4-8c2b-14c748f61f03/volumes" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.857145 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e32a09-8172-443c-bd56-00a536a06de2" path="/var/lib/kubelet/pods/39e32a09-8172-443c-bd56-00a536a06de2/volumes" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.857738 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" path="/var/lib/kubelet/pods/69663c02-7b8c-478d-8975-79fae4dbadea/volumes" Nov 21 13:38:40 crc kubenswrapper[4675]: I1121 13:38:40.858489 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" path="/var/lib/kubelet/pods/8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c/volumes" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 
13:38:41.171890 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pk6br"] Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172126 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172140 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172153 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172161 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172170 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172177 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172185 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172193 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172201 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172208 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172219 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172226 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172235 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172241 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172252 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172259 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172270 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="extract-content" Nov 21 13:38:41 crc 
kubenswrapper[4675]: I1121 13:38:41.172277 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172288 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172295 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172306 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172313 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="extract-utilities" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172320 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172327 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="extract-content" Nov 21 13:38:41 crc kubenswrapper[4675]: E1121 13:38:41.172340 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172347 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172439 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c7001c-ca6f-43a9-b828-02697c5e581a" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172452 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e32a09-8172-443c-bd56-00a536a06de2" containerName="marketplace-operator" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172464 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9d0dc6-7764-48e7-9e6e-aa1bfbd3358c" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172475 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="69663c02-7b8c-478d-8975-79fae4dbadea" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.172486 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f5849e-24d7-4fb4-8c2b-14c748f61f03" containerName="registry-server" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.173235 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.179466 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.183086 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pk6br"] Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.354916 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-utilities\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.354992 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-catalog-content\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.355135 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjsqw\" (UniqueName: \"kubernetes.io/projected/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-kube-api-access-hjsqw\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.456165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-catalog-content\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.456221 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsqw\" (UniqueName: \"kubernetes.io/projected/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-kube-api-access-hjsqw\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.456286 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-utilities\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.457471 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-utilities\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.457541 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-catalog-content\") pod \"redhat-marketplace-pk6br\" (UID: 
\"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.477180 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsqw\" (UniqueName: \"kubernetes.io/projected/1d54f5b5-d5db-4104-9f8b-072086f8f9a4-kube-api-access-hjsqw\") pod \"redhat-marketplace-pk6br\" (UID: \"1d54f5b5-d5db-4104-9f8b-072086f8f9a4\") " pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.496437 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.696209 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pk6br"] Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.771177 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zm72j"] Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.774465 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.777543 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.780255 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm72j"] Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.866225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpxc\" (UniqueName: \"kubernetes.io/projected/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-kube-api-access-dnpxc\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.866297 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-catalog-content\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.866466 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-utilities\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.967222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnpxc\" (UniqueName: \"kubernetes.io/projected/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-kube-api-access-dnpxc\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.967287 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-catalog-content\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " 
pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.967312 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-utilities\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.968164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-utilities\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.968167 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-catalog-content\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:41 crc kubenswrapper[4675]: I1121 13:38:41.986498 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnpxc\" (UniqueName: \"kubernetes.io/projected/3f6b3f8e-0776-47f2-bbe4-ed0d6af49813-kube-api-access-dnpxc\") pod \"redhat-operators-zm72j\" (UID: \"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813\") " pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.103018 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.306953 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm72j"] Nov 21 13:38:42 crc kubenswrapper[4675]: W1121 13:38:42.311644 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f6b3f8e_0776_47f2_bbe4_ed0d6af49813.slice/crio-09f34d525ba44857e56d90bef42ef30fe16a4c2bdaec4113f3a825c3327280e1 WatchSource:0}: Error finding container 09f34d525ba44857e56d90bef42ef30fe16a4c2bdaec4113f3a825c3327280e1: Status 404 returned error can't find the container with id 09f34d525ba44857e56d90bef42ef30fe16a4c2bdaec4113f3a825c3327280e1 Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.541853 4675 generic.go:334] "Generic (PLEG): container finished" podID="1d54f5b5-d5db-4104-9f8b-072086f8f9a4" containerID="cef46ec043cf48e77fbd12511184bc9ca18db665249a5f50fef0a8742f5d0547" exitCode=0 Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.541934 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pk6br" event={"ID":"1d54f5b5-d5db-4104-9f8b-072086f8f9a4","Type":"ContainerDied","Data":"cef46ec043cf48e77fbd12511184bc9ca18db665249a5f50fef0a8742f5d0547"} Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.541962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pk6br" event={"ID":"1d54f5b5-d5db-4104-9f8b-072086f8f9a4","Type":"ContainerStarted","Data":"3a2fe6d1291b92198e696d1951e4adb90b94943cdece1422775b539efc73b8e8"} Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.543457 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f6b3f8e-0776-47f2-bbe4-ed0d6af49813" 
containerID="8dc964c7339aa385b4b50125947c26a562373a86db1b8a4c992c3c47ea53dadd" exitCode=0 Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.543518 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm72j" event={"ID":"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813","Type":"ContainerDied","Data":"8dc964c7339aa385b4b50125947c26a562373a86db1b8a4c992c3c47ea53dadd"} Nov 21 13:38:42 crc kubenswrapper[4675]: I1121 13:38:42.543539 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm72j" event={"ID":"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813","Type":"ContainerStarted","Data":"09f34d525ba44857e56d90bef42ef30fe16a4c2bdaec4113f3a825c3327280e1"} Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.552173 4675 generic.go:334] "Generic (PLEG): container finished" podID="1d54f5b5-d5db-4104-9f8b-072086f8f9a4" containerID="dc7014e001d0da20c8c3afa03cf73cff4ae266f36e870bd8e667dfbba242daf5" exitCode=0 Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.552230 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pk6br" event={"ID":"1d54f5b5-d5db-4104-9f8b-072086f8f9a4","Type":"ContainerDied","Data":"dc7014e001d0da20c8c3afa03cf73cff4ae266f36e870bd8e667dfbba242daf5"} Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.556489 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm72j" event={"ID":"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813","Type":"ContainerStarted","Data":"5801dd21dbd4434437f19d83fcd6e1e7b03bbd22add52c7e7e45c1ae877480e1"} Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.577811 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tmbz4"] Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.578925 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.586579 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.592231 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmbz4"] Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.592845 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7g99\" (UniqueName: \"kubernetes.io/projected/da963da7-38b3-45bf-88ba-54b6e6b9a58f-kube-api-access-v7g99\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.592947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da963da7-38b3-45bf-88ba-54b6e6b9a58f-catalog-content\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.592987 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da963da7-38b3-45bf-88ba-54b6e6b9a58f-utilities\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.694040 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da963da7-38b3-45bf-88ba-54b6e6b9a58f-catalog-content\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.694096 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da963da7-38b3-45bf-88ba-54b6e6b9a58f-utilities\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.694132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7g99\" (UniqueName: \"kubernetes.io/projected/da963da7-38b3-45bf-88ba-54b6e6b9a58f-kube-api-access-v7g99\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.694483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da963da7-38b3-45bf-88ba-54b6e6b9a58f-catalog-content\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.694655 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da963da7-38b3-45bf-88ba-54b6e6b9a58f-utilities\") pod \"certified-operators-tmbz4\" (UID: 
\"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.716295 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7g99\" (UniqueName: \"kubernetes.io/projected/da963da7-38b3-45bf-88ba-54b6e6b9a58f-kube-api-access-v7g99\") pod \"certified-operators-tmbz4\" (UID: \"da963da7-38b3-45bf-88ba-54b6e6b9a58f\") " pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:43 crc kubenswrapper[4675]: I1121 13:38:43.907028 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.167540 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mmt5k"] Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.169197 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.171587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.189468 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmt5k"] Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.200032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rn44\" (UniqueName: \"kubernetes.io/projected/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-kube-api-access-6rn44\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.200108 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-catalog-content\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.200169 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-utilities\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.301114 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rn44\" (UniqueName: \"kubernetes.io/projected/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-kube-api-access-6rn44\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.301179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-catalog-content\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.301228 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-utilities\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.301715 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-utilities\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.301768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-catalog-content\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.320675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rn44\" (UniqueName: \"kubernetes.io/projected/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-kube-api-access-6rn44\") pod \"community-operators-mmt5k\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.341157 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmbz4"] Nov 21 13:38:44 crc kubenswrapper[4675]: W1121 13:38:44.351563 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda963da7_38b3_45bf_88ba_54b6e6b9a58f.slice/crio-f4c32dc6e0e610961cec57c964b9267a3ec9953d64fb3f5f18bef7c8113a1f85 WatchSource:0}: Error finding container f4c32dc6e0e610961cec57c964b9267a3ec9953d64fb3f5f18bef7c8113a1f85: Status 404 returned error can't find the container with id f4c32dc6e0e610961cec57c964b9267a3ec9953d64fb3f5f18bef7c8113a1f85 Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.488425 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.572294 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pk6br" event={"ID":"1d54f5b5-d5db-4104-9f8b-072086f8f9a4","Type":"ContainerStarted","Data":"af392a467b0e26a1e5d0ea85001fd31d482cdb9e592cebb50489c5adab31720c"} Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.575242 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f6b3f8e-0776-47f2-bbe4-ed0d6af49813" containerID="5801dd21dbd4434437f19d83fcd6e1e7b03bbd22add52c7e7e45c1ae877480e1" exitCode=0 Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.575322 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm72j" event={"ID":"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813","Type":"ContainerDied","Data":"5801dd21dbd4434437f19d83fcd6e1e7b03bbd22add52c7e7e45c1ae877480e1"} Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.577242 4675 generic.go:334] "Generic (PLEG): container finished" podID="da963da7-38b3-45bf-88ba-54b6e6b9a58f" containerID="12f4a39fbbf199edf42f67b3bfbb5ce346ad2115a2c2e3f621bc0ffdd6243dcf" exitCode=0 Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.577277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmbz4" event={"ID":"da963da7-38b3-45bf-88ba-54b6e6b9a58f","Type":"ContainerDied","Data":"12f4a39fbbf199edf42f67b3bfbb5ce346ad2115a2c2e3f621bc0ffdd6243dcf"} Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.577297 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmbz4" event={"ID":"da963da7-38b3-45bf-88ba-54b6e6b9a58f","Type":"ContainerStarted","Data":"f4c32dc6e0e610961cec57c964b9267a3ec9953d64fb3f5f18bef7c8113a1f85"} Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.592211 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pk6br" podStartSLOduration=2.192927287 podStartE2EDuration="3.592193059s" podCreationTimestamp="2025-11-21 13:38:41 +0000 UTC" firstStartedPulling="2025-11-21 13:38:42.543125751 +0000 UTC m=+399.269540478" lastFinishedPulling="2025-11-21 13:38:43.942391533 +0000 UTC m=+400.668806250" observedRunningTime="2025-11-21 13:38:44.58962163 +0000 UTC m=+401.316036357" watchObservedRunningTime="2025-11-21 13:38:44.592193059 +0000 UTC m=+401.318607786" Nov 21 13:38:44 crc kubenswrapper[4675]: I1121 13:38:44.683517 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmt5k"] Nov 21 13:38:45 crc kubenswrapper[4675]: I1121 13:38:45.584319 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmbz4" event={"ID":"da963da7-38b3-45bf-88ba-54b6e6b9a58f","Type":"ContainerStarted","Data":"3543853d68404798c1347072f056c85c823fcf2946ac47cd9c8bc5375d55692a"} Nov 21 13:38:45 crc kubenswrapper[4675]: I1121 13:38:45.586644 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerID="685ce698a5fcaa4eface688a696d0f4a90b0919fbb592ec6d561170f1f521a1a" exitCode=0 Nov 21 13:38:45 crc kubenswrapper[4675]: I1121 13:38:45.586704 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmt5k" 
event={"ID":"1e9a8b80-1762-4847-a947-9a7d1ab21b9e","Type":"ContainerDied","Data":"685ce698a5fcaa4eface688a696d0f4a90b0919fbb592ec6d561170f1f521a1a"} Nov 21 13:38:45 crc kubenswrapper[4675]: I1121 13:38:45.586724 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmt5k" event={"ID":"1e9a8b80-1762-4847-a947-9a7d1ab21b9e","Type":"ContainerStarted","Data":"23a92a708ac679df9efff5ea2610109822429b10c58423ef2dd777dc8e46ced5"} Nov 21 13:38:45 crc kubenswrapper[4675]: I1121 13:38:45.594709 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm72j" event={"ID":"3f6b3f8e-0776-47f2-bbe4-ed0d6af49813","Type":"ContainerStarted","Data":"8f65e695344d4bc066cf3c2816ca1ea5c5ac32a420f950ec2379b9afe760d8c0"} Nov 21 13:38:45 crc kubenswrapper[4675]: I1121 13:38:45.641229 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zm72j" podStartSLOduration=2.115451493 podStartE2EDuration="4.64120727s" podCreationTimestamp="2025-11-21 13:38:41 +0000 UTC" firstStartedPulling="2025-11-21 13:38:42.544709313 +0000 UTC m=+399.271124040" lastFinishedPulling="2025-11-21 13:38:45.07046509 +0000 UTC m=+401.796879817" observedRunningTime="2025-11-21 13:38:45.638893018 +0000 UTC m=+402.365307745" watchObservedRunningTime="2025-11-21 13:38:45.64120727 +0000 UTC m=+402.367621997" Nov 21 13:38:46 crc kubenswrapper[4675]: I1121 13:38:46.136821 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:38:46 crc kubenswrapper[4675]: I1121 13:38:46.136904 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:38:46 crc kubenswrapper[4675]: I1121 13:38:46.602105 4675 generic.go:334] "Generic (PLEG): container finished" podID="da963da7-38b3-45bf-88ba-54b6e6b9a58f" containerID="3543853d68404798c1347072f056c85c823fcf2946ac47cd9c8bc5375d55692a" exitCode=0 Nov 21 13:38:46 crc kubenswrapper[4675]: I1121 13:38:46.602174 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmbz4" event={"ID":"da963da7-38b3-45bf-88ba-54b6e6b9a58f","Type":"ContainerDied","Data":"3543853d68404798c1347072f056c85c823fcf2946ac47cd9c8bc5375d55692a"} Nov 21 13:38:47 crc kubenswrapper[4675]: I1121 13:38:47.610101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmbz4" event={"ID":"da963da7-38b3-45bf-88ba-54b6e6b9a58f","Type":"ContainerStarted","Data":"182afeaf6019264640a24c9db87b51a99bd9cc4ad7fa460c0217588c101441e8"} Nov 21 13:38:47 crc kubenswrapper[4675]: I1121 13:38:47.613469 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerID="8253c8aa7978876d08c287f55eb1c853ab8851bffc79371f651378f2847b82eb" exitCode=0 Nov 21 13:38:47 crc kubenswrapper[4675]: I1121 13:38:47.613510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmt5k" 
event={"ID":"1e9a8b80-1762-4847-a947-9a7d1ab21b9e","Type":"ContainerDied","Data":"8253c8aa7978876d08c287f55eb1c853ab8851bffc79371f651378f2847b82eb"} Nov 21 13:38:47 crc kubenswrapper[4675]: I1121 13:38:47.628026 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tmbz4" podStartSLOduration=2.113781586 podStartE2EDuration="4.628009783s" podCreationTimestamp="2025-11-21 13:38:43 +0000 UTC" firstStartedPulling="2025-11-21 13:38:44.578685235 +0000 UTC m=+401.305099962" lastFinishedPulling="2025-11-21 13:38:47.092913432 +0000 UTC m=+403.819328159" observedRunningTime="2025-11-21 13:38:47.627402637 +0000 UTC m=+404.353817364" watchObservedRunningTime="2025-11-21 13:38:47.628009783 +0000 UTC m=+404.354424510" Nov 21 13:38:48 crc kubenswrapper[4675]: I1121 13:38:48.620740 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmt5k" event={"ID":"1e9a8b80-1762-4847-a947-9a7d1ab21b9e","Type":"ContainerStarted","Data":"ea21d41fb79a4708b31fa83850a220fcdff20bd95991a425c0e22417a5d0a2e5"} Nov 21 13:38:48 crc kubenswrapper[4675]: I1121 13:38:48.642636 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mmt5k" podStartSLOduration=1.927977942 podStartE2EDuration="4.642611889s" podCreationTimestamp="2025-11-21 13:38:44 +0000 UTC" firstStartedPulling="2025-11-21 13:38:45.591183495 +0000 UTC m=+402.317598222" lastFinishedPulling="2025-11-21 13:38:48.305817442 +0000 UTC m=+405.032232169" observedRunningTime="2025-11-21 13:38:48.642585088 +0000 UTC m=+405.368999825" watchObservedRunningTime="2025-11-21 13:38:48.642611889 +0000 UTC m=+405.369026616" Nov 21 13:38:51 crc kubenswrapper[4675]: I1121 13:38:51.497347 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:51 crc kubenswrapper[4675]: I1121 13:38:51.497906 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:51 crc kubenswrapper[4675]: I1121 13:38:51.537035 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:51 crc kubenswrapper[4675]: I1121 13:38:51.681244 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pk6br" Nov 21 13:38:52 crc kubenswrapper[4675]: I1121 13:38:52.103490 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:52 crc kubenswrapper[4675]: I1121 13:38:52.103555 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:52 crc kubenswrapper[4675]: I1121 13:38:52.146018 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:52 crc kubenswrapper[4675]: I1121 13:38:52.679393 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zm72j" Nov 21 13:38:53 crc kubenswrapper[4675]: I1121 13:38:53.907341 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:53 crc kubenswrapper[4675]: I1121 13:38:53.907405 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:53 crc kubenswrapper[4675]: I1121 13:38:53.946710 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:54 crc kubenswrapper[4675]: I1121 13:38:54.488904 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:54 crc kubenswrapper[4675]: I1121 13:38:54.488946 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:54 crc kubenswrapper[4675]: I1121 13:38:54.534791 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:38:54 crc kubenswrapper[4675]: I1121 13:38:54.688229 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tmbz4" Nov 21 13:38:54 crc kubenswrapper[4675]: I1121 13:38:54.691294 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.185357 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx"] Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.187679 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.189574 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.189752 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.189752 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.193549 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.193946 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.206714 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx"] Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.225412 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zbb\" (UniqueName: \"kubernetes.io/projected/51d375b1-7ede-4bf8-a9a4-1065f72539b0-kube-api-access-z9zbb\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.225456 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/51d375b1-7ede-4bf8-a9a4-1065f72539b0-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: 
\"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.225479 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/51d375b1-7ede-4bf8-a9a4-1065f72539b0-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.326393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zbb\" (UniqueName: \"kubernetes.io/projected/51d375b1-7ede-4bf8-a9a4-1065f72539b0-kube-api-access-z9zbb\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.326471 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/51d375b1-7ede-4bf8-a9a4-1065f72539b0-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.326509 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/51d375b1-7ede-4bf8-a9a4-1065f72539b0-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.329605 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/51d375b1-7ede-4bf8-a9a4-1065f72539b0-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.334864 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/51d375b1-7ede-4bf8-a9a4-1065f72539b0-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.353280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zbb\" (UniqueName: \"kubernetes.io/projected/51d375b1-7ede-4bf8-a9a4-1065f72539b0-kube-api-access-z9zbb\") pod \"cluster-monitoring-operator-6d5b84845-w7skx\" (UID: \"51d375b1-7ede-4bf8-a9a4-1065f72539b0\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.506784 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.678197 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx"] Nov 21 13:39:09 crc kubenswrapper[4675]: I1121 13:39:09.731555 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" event={"ID":"51d375b1-7ede-4bf8-a9a4-1065f72539b0","Type":"ContainerStarted","Data":"7dadc246607d7c30a428e95d1fe6de47787d5a0b4fb7734eb5bda6f9a58c0e53"} Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.267219 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-99v9q"] Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.268711 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.285616 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-99v9q"] Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.288727 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea868a51-0b65-4a83-8a83-d522340127b8-trusted-ca\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b46qm\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-kube-api-access-b46qm\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289338 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-bound-sa-token\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289455 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea868a51-0b65-4a83-8a83-d522340127b8-registry-certificates\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea868a51-0b65-4a83-8a83-d522340127b8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289745 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea868a51-0b65-4a83-8a83-d522340127b8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.289908 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-registry-tls\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.349511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.384822 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6"] Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.385624 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.387472 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.388102 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-svmtm" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390657 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea868a51-0b65-4a83-8a83-d522340127b8-trusted-ca\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390710 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b46qm\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-kube-api-access-b46qm\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390743 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-bound-sa-token\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390766 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea868a51-0b65-4a83-8a83-d522340127b8-registry-certificates\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea868a51-0b65-4a83-8a83-d522340127b8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390822 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea868a51-0b65-4a83-8a83-d522340127b8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.390862 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-registry-tls\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.391710 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ea868a51-0b65-4a83-8a83-d522340127b8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.391763 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea868a51-0b65-4a83-8a83-d522340127b8-trusted-ca\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.392259 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ea868a51-0b65-4a83-8a83-d522340127b8-registry-certificates\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.396764 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-registry-tls\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.398640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ea868a51-0b65-4a83-8a83-d522340127b8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.400565 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6"] Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.411185 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b46qm\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-kube-api-access-b46qm\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.412437 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea868a51-0b65-4a83-8a83-d522340127b8-bound-sa-token\") pod \"image-registry-66df7c8f76-99v9q\" (UID: \"ea868a51-0b65-4a83-8a83-d522340127b8\") " pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.492141 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e8cfc05c-2e36-402b-9550-3c63d83d4ccc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lf9f6\" (UID: \"e8cfc05c-2e36-402b-9550-3c63d83d4ccc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.581759 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.603583 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e8cfc05c-2e36-402b-9550-3c63d83d4ccc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lf9f6\" (UID: \"e8cfc05c-2e36-402b-9550-3c63d83d4ccc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" Nov 21 13:39:14 crc kubenswrapper[4675]: E1121 13:39:14.603745 4675 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 21 13:39:14 crc kubenswrapper[4675]: E1121 13:39:14.603832 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8cfc05c-2e36-402b-9550-3c63d83d4ccc-tls-certificates podName:e8cfc05c-2e36-402b-9550-3c63d83d4ccc nodeName:}" failed. No retries permitted until 2025-11-21 13:39:15.103809871 +0000 UTC m=+431.830224608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/e8cfc05c-2e36-402b-9550-3c63d83d4ccc-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-lf9f6" (UID: "e8cfc05c-2e36-402b-9550-3c63d83d4ccc") : secret "prometheus-operator-admission-webhook-tls" not found Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.748505 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-99v9q"] Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.759400 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" event={"ID":"51d375b1-7ede-4bf8-a9a4-1065f72539b0","Type":"ContainerStarted","Data":"b90a530db2ba07ab79f213cbb53a64348745f029cc8150787a48454b273fa65e"} Nov 21 13:39:14 crc kubenswrapper[4675]: I1121 13:39:14.772631 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-w7skx" podStartSLOduration=1.8371842950000001 podStartE2EDuration="5.772615291s" podCreationTimestamp="2025-11-21 13:39:09 +0000 UTC" firstStartedPulling="2025-11-21 13:39:09.687679261 +0000 UTC m=+426.414093998" lastFinishedPulling="2025-11-21 13:39:13.623110267 +0000 UTC m=+430.349524994" observedRunningTime="2025-11-21 13:39:14.771377108 +0000 UTC m=+431.497791835" watchObservedRunningTime="2025-11-21 13:39:14.772615291 +0000 UTC m=+431.499030018" Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.109436 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e8cfc05c-2e36-402b-9550-3c63d83d4ccc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lf9f6\" (UID: \"e8cfc05c-2e36-402b-9550-3c63d83d4ccc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.115845 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e8cfc05c-2e36-402b-9550-3c63d83d4ccc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lf9f6\" (UID: \"e8cfc05c-2e36-402b-9550-3c63d83d4ccc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" Nov 21 
13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.338691 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.568749 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6"] Nov 21 13:39:15 crc kubenswrapper[4675]: W1121 13:39:15.575865 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8cfc05c_2e36_402b_9550_3c63d83d4ccc.slice/crio-d5ba8e950b52934de8a6db42817b2a300a2a526acb9d621e94f7e196594d2d1c WatchSource:0}: Error finding container d5ba8e950b52934de8a6db42817b2a300a2a526acb9d621e94f7e196594d2d1c: Status 404 returned error can't find the container with id d5ba8e950b52934de8a6db42817b2a300a2a526acb9d621e94f7e196594d2d1c Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.769659 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" event={"ID":"e8cfc05c-2e36-402b-9550-3c63d83d4ccc","Type":"ContainerStarted","Data":"d5ba8e950b52934de8a6db42817b2a300a2a526acb9d621e94f7e196594d2d1c"} Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.771838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" event={"ID":"ea868a51-0b65-4a83-8a83-d522340127b8","Type":"ContainerStarted","Data":"e53f3dfde1c28962557d2dda11b78bfb123b6223ea492db43e5ed1da1cc307d1"} Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.771937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" event={"ID":"ea868a51-0b65-4a83-8a83-d522340127b8","Type":"ContainerStarted","Data":"4ae684602bea8ffb309c3ba5e5c021af21fa67ce894dade65f785ff90ea7e357"} Nov 21 13:39:15 crc kubenswrapper[4675]: I1121 13:39:15.795332 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" podStartSLOduration=1.795298405 podStartE2EDuration="1.795298405s" podCreationTimestamp="2025-11-21 13:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:39:15.789010536 +0000 UTC m=+432.515425283" watchObservedRunningTime="2025-11-21 13:39:15.795298405 +0000 UTC m=+432.521713172" Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.136580 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.136654 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.136714 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:39:16 crc kubenswrapper[4675]: 
Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.137307 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"856f537a5bd9ff3fb685a8ef1888397fd35d05ca4e4e1461b5e1c59414d0ee63"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.137385 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://856f537a5bd9ff3fb685a8ef1888397fd35d05ca4e4e1461b5e1c59414d0ee63" gracePeriod=600
Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.777986 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="856f537a5bd9ff3fb685a8ef1888397fd35d05ca4e4e1461b5e1c59414d0ee63" exitCode=0
Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.778056 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"856f537a5bd9ff3fb685a8ef1888397fd35d05ca4e4e1461b5e1c59414d0ee63"}
Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.778130 4675 scope.go:117] "RemoveContainer" containerID="3bfbd9b9dd2503579c179a3cbc8803b42c39d661f45c97dcc4fa231bb9699434"
Nov 21 13:39:16 crc kubenswrapper[4675]: I1121 13:39:16.778251 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q"
Nov 21 13:39:17 crc kubenswrapper[4675]: I1121 13:39:17.786161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"e2db3d60559e3e1b30d576e8b6d70d42aa99aac71aa518a3a570555f006efdc7"}
Nov 21 13:39:18 crc kubenswrapper[4675]: I1121 13:39:18.794360 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" event={"ID":"e8cfc05c-2e36-402b-9550-3c63d83d4ccc","Type":"ContainerStarted","Data":"ebe8a8b30f85dd4d76c29b2ab930af4ee8a6404f6e9efe826b1e879a38203650"}
Nov 21 13:39:18 crc kubenswrapper[4675]: I1121 13:39:18.811666 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6" podStartSLOduration=2.703203223 podStartE2EDuration="4.811643966s" podCreationTimestamp="2025-11-21 13:39:14 +0000 UTC" firstStartedPulling="2025-11-21 13:39:15.579877021 +0000 UTC m=+432.306291758" lastFinishedPulling="2025-11-21 13:39:17.688317774 +0000 UTC m=+434.414732501" observedRunningTime="2025-11-21 13:39:18.810190997 +0000 UTC m=+435.536605774" watchObservedRunningTime="2025-11-21 13:39:18.811643966 +0000 UTC m=+435.538058703"
Nov 21 13:39:19 crc kubenswrapper[4675]: I1121 13:39:19.798671 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6"
Nov 21 13:39:19 crc kubenswrapper[4675]: I1121 13:39:19.802913 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lf9f6"
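
The block above is a complete liveness-restart arc for machine-config-daemon-vnxnx: the failed probe at 13:39:16.136 (connection refused on 127.0.0.1:8798), the decision "failed liveness probe, will be restarted", the kill with gracePeriod=600, the ContainerDied event (exitCode=0, so the daemon stopped cleanly within the grace period), and a new container ID started about a second later. A sketch for pulling such arcs out of a kubelet journal dump; the regexes target the exact message shapes visible here and may need adjusting for other kubelet versions:

```python
# Correlate liveness-probe failures with the kill and restart that follow,
# reading kubelet journal lines from stdin
# (e.g. journalctl -u kubelet | python3 arcs.py).
import re
import sys

probe_re = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
kill_re  = re.compile(r'"Killing container with a grace period" pod="([^"]+)".*gracePeriod=(\d+)')
start_re = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=.*ContainerStarted')

failing = set()
for line in sys.stdin:
    if m := probe_re.search(line):
        failing.add(m.group(1))
        print(f"liveness failed: {m.group(1)}")
    elif (m := kill_re.search(line)) and m.group(1) in failing:
        print(f"  killing container (grace {m.group(2)}s)")
    elif (m := start_re.search(line)) and m.group(1) in failing:
        print(f"  restarted: {m.group(1)}")
        failing.discard(m.group(1))
```
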
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.511109 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-76lfd"]
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.512501 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.517358 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.517968 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-w9k2g"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.519176 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.521012 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.576726 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-76lfd"]
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.690555 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.690617 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kljvr\" (UniqueName: \"kubernetes.io/projected/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-kube-api-access-kljvr\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.690646 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd"
Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.690662 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd"
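
The prometheus-operator pod now walks the volume manager's standard sequence, one line per volume per phase: VerifyControllerAttachedVolume (reconciler_common.go:245) above, then MountVolume started (reconciler_common.go:218) and MountVolume.SetUp succeeded (operation_generator.go:637) in the entries that follow; the same pattern repeats below for openshift-state-metrics, node-exporter, kube-state-metrics, alertmanager-main-0, and thanos-querier. A sketch that times the mount phase per pod from these messages (shapes as rendered in this log):

```python
# Per-pod volume set-up latency: first "MountVolume started" to last
# "MountVolume.SetUp succeeded", from kubelet journal lines on stdin.
import re
import sys

klog_ts  = re.compile(r'[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)')
started  = re.compile(r'operationExecutor\.MountVolume started for volume.*pod="([^"]+)"')
finished = re.compile(r'MountVolume\.SetUp succeeded for volume.*pod="([^"]+)"')

def secs(hms: str) -> float:
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

first, last = {}, {}
for line in sys.stdin:
    if not (t := klog_ts.search(line)):
        continue
    when = secs(t.group(1))
    if m := started.search(line):
        first.setdefault(m.group(1), when)   # earliest mount start per pod
    elif m := finished.search(line):
        last[m.group(1)] = when              # latest successful set-up per pod

for pod in sorted(first.keys() & last.keys()):
    print(f"{pod}: {last[pod] - first[pod]:.3f}s")
```

For this pod the window is 13:39:20.792 to 13:39:20.812, about 20ms to mount all four volumes.
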
pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.792430 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kljvr\" (UniqueName: \"kubernetes.io/projected/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-kube-api-access-kljvr\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.792484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.792517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.794177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.797768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.804356 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.812292 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kljvr\" (UniqueName: \"kubernetes.io/projected/bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d-kube-api-access-kljvr\") pod \"prometheus-operator-db54df47d-76lfd\" (UID: \"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:20 crc kubenswrapper[4675]: I1121 13:39:20.828854 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" Nov 21 13:39:21 crc kubenswrapper[4675]: I1121 13:39:21.046704 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-76lfd"] Nov 21 13:39:21 crc kubenswrapper[4675]: I1121 13:39:21.810641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" event={"ID":"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d","Type":"ContainerStarted","Data":"a57725f738b88bbf1d64582f09f9edf380afd0c0b2924d9833aabcbbb98fac8a"} Nov 21 13:39:23 crc kubenswrapper[4675]: I1121 13:39:23.822258 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" event={"ID":"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d","Type":"ContainerStarted","Data":"059504e3989c3f8ad8890c9abec75fd033976b5e6009f8d78cde63ce661db24d"} Nov 21 13:39:23 crc kubenswrapper[4675]: I1121 13:39:23.822601 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" event={"ID":"bd1aaf10-c3d3-40f4-9a2e-1c33b4af862d","Type":"ContainerStarted","Data":"a44d8dd593600711d205ff0ea413fa8e5c0e743268c6fb812e1959d5b4903fbd"} Nov 21 13:39:23 crc kubenswrapper[4675]: I1121 13:39:23.837039 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-76lfd" podStartSLOduration=1.745776703 podStartE2EDuration="3.837019375s" podCreationTimestamp="2025-11-21 13:39:20 +0000 UTC" firstStartedPulling="2025-11-21 13:39:21.054960215 +0000 UTC m=+437.781374952" lastFinishedPulling="2025-11-21 13:39:23.146202887 +0000 UTC m=+439.872617624" observedRunningTime="2025-11-21 13:39:23.835277539 +0000 UTC m=+440.561692286" watchObservedRunningTime="2025-11-21 13:39:23.837019375 +0000 UTC m=+440.563434112" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.788122 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-qd66k"] Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.789450 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.791701 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-pqndz" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.792270 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.792286 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.805869 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-qd66k"] Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.831764 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-knk6q"] Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.832882 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.834118 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5"] Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.835172 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.836115 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9fgtn" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.836232 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.836318 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.840360 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.840472 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.840660 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.840722 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-wkb5p" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.858779 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5"] Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967256 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmg4d\" (UniqueName: \"kubernetes.io/projected/c818ea5a-0855-474d-acd1-f61712e082e6-kube-api-access-gmg4d\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967312 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c818ea5a-0855-474d-acd1-f61712e082e6-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f743fa20-4f79-4263-959f-e47c664aa064-metrics-client-ca\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967368 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-wtmp\") pod 
\"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1743ed2f-3fcf-40a9-90c4-1c794f26092f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967500 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-textfile\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967561 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967609 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwpn\" (UniqueName: \"kubernetes.io/projected/f743fa20-4f79-4263-959f-e47c664aa064-kube-api-access-cqwpn\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967648 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967704 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-sys\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1743ed2f-3fcf-40a9-90c4-1c794f26092f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-tls\") pod 
\"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjkgt\" (UniqueName: \"kubernetes.io/projected/1743ed2f-3fcf-40a9-90c4-1c794f26092f-kube-api-access-kjkgt\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967868 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1743ed2f-3fcf-40a9-90c4-1c794f26092f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-tls\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-root\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967974 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:25 crc kubenswrapper[4675]: I1121 13:39:25.967994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c818ea5a-0855-474d-acd1-f61712e082e6-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069059 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c818ea5a-0855-474d-acd1-f61712e082e6-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069127 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmg4d\" (UniqueName: \"kubernetes.io/projected/c818ea5a-0855-474d-acd1-f61712e082e6-kube-api-access-gmg4d\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: 
\"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069147 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c818ea5a-0855-474d-acd1-f61712e082e6-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f743fa20-4f79-4263-959f-e47c664aa064-metrics-client-ca\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069195 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-wtmp\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069210 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1743ed2f-3fcf-40a9-90c4-1c794f26092f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069230 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-textfile\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069252 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069274 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwpn\" (UniqueName: \"kubernetes.io/projected/f743fa20-4f79-4263-959f-e47c664aa064-kube-api-access-cqwpn\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-wtmp\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070651 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/f743fa20-4f79-4263-959f-e47c664aa064-metrics-client-ca\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.069294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-sys\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070798 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1743ed2f-3fcf-40a9-90c4-1c794f26092f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjkgt\" (UniqueName: \"kubernetes.io/projected/1743ed2f-3fcf-40a9-90c4-1c794f26092f-kube-api-access-kjkgt\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.070977 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1743ed2f-3fcf-40a9-90c4-1c794f26092f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.071083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-tls\") pod 
\"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.071149 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-root\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.071180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.071729 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c818ea5a-0855-474d-acd1-f61712e082e6-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.072345 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1743ed2f-3fcf-40a9-90c4-1c794f26092f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.072390 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c818ea5a-0855-474d-acd1-f61712e082e6-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.072672 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-root\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.073120 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f743fa20-4f79-4263-959f-e47c664aa064-sys\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.073427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-textfile\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.079362 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/1743ed2f-3fcf-40a9-90c4-1c794f26092f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.080460 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.089179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-tls\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.091105 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1743ed2f-3fcf-40a9-90c4-1c794f26092f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.093995 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c818ea5a-0855-474d-acd1-f61712e082e6-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.094560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f743fa20-4f79-4263-959f-e47c664aa064-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.094930 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmg4d\" (UniqueName: \"kubernetes.io/projected/c818ea5a-0855-474d-acd1-f61712e082e6-kube-api-access-gmg4d\") pod \"kube-state-metrics-777cb5bd5d-5c7b5\" (UID: \"c818ea5a-0855-474d-acd1-f61712e082e6\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.099823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjkgt\" (UniqueName: \"kubernetes.io/projected/1743ed2f-3fcf-40a9-90c4-1c794f26092f-kube-api-access-kjkgt\") pod \"openshift-state-metrics-566fddb674-qd66k\" (UID: \"1743ed2f-3fcf-40a9-90c4-1c794f26092f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.104022 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.110998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqwpn\" (UniqueName: \"kubernetes.io/projected/f743fa20-4f79-4263-959f-e47c664aa064-kube-api-access-cqwpn\") pod \"node-exporter-knk6q\" (UID: \"f743fa20-4f79-4263-959f-e47c664aa064\") " pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.154408 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-knk6q" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.160857 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.320424 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-qd66k"] Nov 21 13:39:26 crc kubenswrapper[4675]: W1121 13:39:26.325485 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1743ed2f_3fcf_40a9_90c4_1c794f26092f.slice/crio-f1e0460fe05acd253ab9d84aae30fdde98d8de5eab2e0c416a1f307ca4555b67 WatchSource:0}: Error finding container f1e0460fe05acd253ab9d84aae30fdde98d8de5eab2e0c416a1f307ca4555b67: Status 404 returned error can't find the container with id f1e0460fe05acd253ab9d84aae30fdde98d8de5eab2e0c416a1f307ca4555b67 Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.384371 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5"] Nov 21 13:39:26 crc kubenswrapper[4675]: W1121 13:39:26.397885 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc818ea5a_0855_474d_acd1_f61712e082e6.slice/crio-1921bf0e6d11448c40b11f157b6d91772c788032aec3fb6c716713710cff94d5 WatchSource:0}: Error finding container 1921bf0e6d11448c40b11f157b6d91772c788032aec3fb6c716713710cff94d5: Status 404 returned error can't find the container with id 1921bf0e6d11448c40b11f157b6d91772c788032aec3fb6c716713710cff94d5 Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.856548 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" event={"ID":"c818ea5a-0855-474d-acd1-f61712e082e6","Type":"ContainerStarted","Data":"1921bf0e6d11448c40b11f157b6d91772c788032aec3fb6c716713710cff94d5"} Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.858453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-knk6q" event={"ID":"f743fa20-4f79-4263-959f-e47c664aa064","Type":"ContainerStarted","Data":"8dfefcf0b49cb26f2da6095082ffe6fba708e55668d4be1503f99b0b07010e5e"} Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.860449 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" event={"ID":"1743ed2f-3fcf-40a9-90c4-1c794f26092f","Type":"ContainerStarted","Data":"dfb914d27436986aa7ccdbff317f4e6febcdd17b3d0fcb8cab9a42a607f8db23"} Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.860469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" 
event={"ID":"1743ed2f-3fcf-40a9-90c4-1c794f26092f","Type":"ContainerStarted","Data":"df2284b89817a33ccf8528bdcb54bf9db886ee3078149961845917597ec7a7a3"} Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.860478 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" event={"ID":"1743ed2f-3fcf-40a9-90c4-1c794f26092f","Type":"ContainerStarted","Data":"f1e0460fe05acd253ab9d84aae30fdde98d8de5eab2e0c416a1f307ca4555b67"} Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.880178 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.882092 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.885816 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886030 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886157 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886290 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886367 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-ln6bh" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886554 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.886797 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.896797 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.902092 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985463 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztfm\" (UniqueName: \"kubernetes.io/projected/2fee1cb5-612f-4214-83d7-93154c4957cc-kube-api-access-xztfm\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2fee1cb5-612f-4214-83d7-93154c4957cc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 
13:39:26.985530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fee1cb5-612f-4214-83d7-93154c4957cc-config-out\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2fee1cb5-612f-4214-83d7-93154c4957cc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985568 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985588 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fee1cb5-612f-4214-83d7-93154c4957cc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985619 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fee1cb5-612f-4214-83d7-93154c4957cc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985666 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985687 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-web-config\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985731 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:26 crc kubenswrapper[4675]: I1121 13:39:26.985814 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-config-volume\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.087264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088011 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-config-volume\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088051 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xztfm\" (UniqueName: \"kubernetes.io/projected/2fee1cb5-612f-4214-83d7-93154c4957cc-kube-api-access-xztfm\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088148 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2fee1cb5-612f-4214-83d7-93154c4957cc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fee1cb5-612f-4214-83d7-93154c4957cc-config-out\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2fee1cb5-612f-4214-83d7-93154c4957cc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088223 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088257 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fee1cb5-612f-4214-83d7-93154c4957cc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fee1cb5-612f-4214-83d7-93154c4957cc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088385 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-web-config\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.088916 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2fee1cb5-612f-4214-83d7-93154c4957cc-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.089502 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2fee1cb5-612f-4214-83d7-93154c4957cc-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.090110 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fee1cb5-612f-4214-83d7-93154c4957cc-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.093441 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.093832 4675 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.093970 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-config-volume\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.100230 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.100804 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.100830 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2fee1cb5-612f-4214-83d7-93154c4957cc-web-config\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.104699 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2fee1cb5-612f-4214-83d7-93154c4957cc-config-out\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.106512 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2fee1cb5-612f-4214-83d7-93154c4957cc-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.109746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xztfm\" (UniqueName: \"kubernetes.io/projected/2fee1cb5-612f-4214-83d7-93154c4957cc-kube-api-access-xztfm\") pod \"alertmanager-main-0\" (UID: \"2fee1cb5-612f-4214-83d7-93154c4957cc\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.199185 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.431793 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.765580 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5fb587c56b-z7594"] Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.767225 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.770741 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.770997 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.771199 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.771631 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.771661 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.771714 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-6amudslo316fd" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.771858 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-5784k" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.790731 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5fb587c56b-z7594"] Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.867501 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"d6285485201ef5d1dc35549d9ebd060943e020dca8390535a19e4c3d51cb4078"} Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901111 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-tls\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901398 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901532 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkskk\" (UniqueName: 
\"kubernetes.io/projected/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-kube-api-access-rkskk\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901678 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-grpc-tls\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901812 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901903 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-metrics-client-ca\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.901984 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:27 crc kubenswrapper[4675]: I1121 13:39:27.902097 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004077 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004155 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkskk\" (UniqueName: \"kubernetes.io/projected/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-kube-api-access-rkskk\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004205 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-grpc-tls\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004253 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-metrics-client-ca\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.004394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-tls\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.005555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-metrics-client-ca\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.008471 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.008481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.008584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.008984 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-grpc-tls\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.009800 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.014501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-secret-thanos-querier-tls\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.020320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkskk\" (UniqueName: \"kubernetes.io/projected/d50dea3b-6a88-4ad4-a6be-0e27adcfd49d-kube-api-access-rkskk\") pod \"thanos-querier-5fb587c56b-z7594\" (UID: \"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d\") " pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:28 crc kubenswrapper[4675]: I1121 13:39:28.088517 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.632502 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b67d74f5d-4dn67"] Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.633623 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.667958 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b67d74f5d-4dn67"] Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751433 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-oauth-config\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751476 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-config\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751499 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-service-ca\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751531 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-oauth-serving-cert\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhbd\" (UniqueName: \"kubernetes.io/projected/bc475023-1d7f-46cc-a70a-489c5c1b6643-kube-api-access-hfhbd\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751569 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-trusted-ca-bundle\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.751587 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-serving-cert\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.852686 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-oauth-serving-cert\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc 
kubenswrapper[4675]: I1121 13:39:30.852739 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhbd\" (UniqueName: \"kubernetes.io/projected/bc475023-1d7f-46cc-a70a-489c5c1b6643-kube-api-access-hfhbd\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.852760 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-trusted-ca-bundle\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.852783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-serving-cert\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.852859 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-oauth-config\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.852884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-config\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.852912 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-service-ca\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.853589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-oauth-serving-cert\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.853977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-service-ca\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.854155 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-config\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.854435 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-trusted-ca-bundle\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.864922 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-oauth-config\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.868902 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-serving-cert\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.872025 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhbd\" (UniqueName: \"kubernetes.io/projected/bc475023-1d7f-46cc-a70a-489c5c1b6643-kube-api-access-hfhbd\") pod \"console-5b67d74f5d-4dn67\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:30 crc kubenswrapper[4675]: I1121 13:39:30.948182 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.088572 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-68c5fdf698-xvbg2"] Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.090175 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.098198 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.098275 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.099009 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-92dxz" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.099309 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.100517 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.100981 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3tmdh4mjp2e06" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.107625 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68c5fdf698-xvbg2"] Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.257233 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-secret-metrics-server-tls\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.257531 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-client-ca-bundle\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.257685 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8hrr\" (UniqueName: \"kubernetes.io/projected/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-kube-api-access-l8hrr\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.257779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.257891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-metrics-server-audit-profiles\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " 
pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.257987 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-secret-metrics-client-certs\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.258196 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-audit-log\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8hrr\" (UniqueName: \"kubernetes.io/projected/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-kube-api-access-l8hrr\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-metrics-server-audit-profiles\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360389 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-secret-metrics-client-certs\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360454 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-audit-log\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360503 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-secret-metrics-server-tls\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.360545 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-client-ca-bundle\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.361441 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-audit-log\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.361762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.363423 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-metrics-server-audit-profiles\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.364162 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-secret-metrics-server-tls\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.364632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-secret-metrics-client-certs\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.364657 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-client-ca-bundle\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.389650 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8hrr\" (UniqueName: \"kubernetes.io/projected/3d4f71fd-5b53-4dd2-9404-fa271dedb59f-kube-api-access-l8hrr\") pod \"metrics-server-68c5fdf698-xvbg2\" (UID: \"3d4f71fd-5b53-4dd2-9404-fa271dedb59f\") " pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.412804 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.602857 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-895f7f958-9lngk"] Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.603695 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.605547 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.605933 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.611192 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-895f7f958-9lngk"] Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.665405 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b079f4aa-2bbc-4de3-9501-be178900f177-monitoring-plugin-cert\") pod \"monitoring-plugin-895f7f958-9lngk\" (UID: \"b079f4aa-2bbc-4de3-9501-be178900f177\") " pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.766738 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b079f4aa-2bbc-4de3-9501-be178900f177-monitoring-plugin-cert\") pod \"monitoring-plugin-895f7f958-9lngk\" (UID: \"b079f4aa-2bbc-4de3-9501-be178900f177\") " pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.770868 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b079f4aa-2bbc-4de3-9501-be178900f177-monitoring-plugin-cert\") pod \"monitoring-plugin-895f7f958-9lngk\" (UID: \"b079f4aa-2bbc-4de3-9501-be178900f177\") " pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:31 crc kubenswrapper[4675]: I1121 13:39:31.919163 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.055747 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.057799 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.064109 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.064342 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.064689 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.064438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-fcrns" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.065392 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.065408 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.066233 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.066555 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-2albafkt2pknp" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.066611 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.066784 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.066998 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070536 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070594 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-config-out\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070636 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-web-config\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070650 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070670 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070717 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070748 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070807 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc 
kubenswrapper[4675]: I1121 13:39:32.070829 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070870 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-config\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070894 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplhr\" (UniqueName: \"kubernetes.io/projected/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-kube-api-access-xplhr\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070923 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070951 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.070978 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.073239 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.076199 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.077920 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.171821 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.171860 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.171887 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172713 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172728 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172745 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-config-out\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172760 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-web-config\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.172843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173022 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173102 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173131 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173151 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173167 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173210 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-config\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173228 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplhr\" (UniqueName: \"kubernetes.io/projected/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-kube-api-access-xplhr\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.173914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.174724 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.175914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.176403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.176768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.176782 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.178344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.178515 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-config-out\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.178630 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.178644 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-web-config\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.179856 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.180497 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.180510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-config\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.180757 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.181324 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.188396 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplhr\" (UniqueName: \"kubernetes.io/projected/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-kube-api-access-xplhr\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.612136 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f5ced7d3-4bda-4764-ae64-58ffcaf2899a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"f5ced7d3-4bda-4764-ae64-58ffcaf2899a\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:32 crc kubenswrapper[4675]: I1121 13:39:32.674768 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:34 crc kubenswrapper[4675]: I1121 13:39:34.589605 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-99v9q" Nov 21 13:39:34 crc kubenswrapper[4675]: I1121 13:39:34.687911 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nd8c"] Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.029039 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5fb587c56b-z7594"] Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.267465 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-895f7f958-9lngk"] Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.270087 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b67d74f5d-4dn67"] Nov 21 13:39:35 crc kubenswrapper[4675]: W1121 13:39:35.272438 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb079f4aa_2bbc_4de3_9501_be178900f177.slice/crio-4d4cd389bae6d6a7867f7b15a061814c8fdbca61bfa4534a6fd5cb7ee2ec4ffc WatchSource:0}: Error finding container 4d4cd389bae6d6a7867f7b15a061814c8fdbca61bfa4534a6fd5cb7ee2ec4ffc: Status 404 returned error can't find the container with id 4d4cd389bae6d6a7867f7b15a061814c8fdbca61bfa4534a6fd5cb7ee2ec4ffc Nov 21 13:39:35 crc kubenswrapper[4675]: W1121 13:39:35.274682 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc475023_1d7f_46cc_a70a_489c5c1b6643.slice/crio-57ad39481a7daf3ad3bff402c79c7af15cec5d836a6a98fa4f5dc178c3c33eb1 WatchSource:0}: Error finding container 57ad39481a7daf3ad3bff402c79c7af15cec5d836a6a98fa4f5dc178c3c33eb1: Status 404 returned error can't find the container with id 57ad39481a7daf3ad3bff402c79c7af15cec5d836a6a98fa4f5dc178c3c33eb1 Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.328404 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 21 13:39:35 crc kubenswrapper[4675]: W1121 13:39:35.341246 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5ced7d3_4bda_4764_ae64_58ffcaf2899a.slice/crio-b426d2833e9f923349b3e841c5b627754559509a016a16f494f2bc547031a324 WatchSource:0}: Error finding container b426d2833e9f923349b3e841c5b627754559509a016a16f494f2bc547031a324: Status 404 returned error can't find the container with id b426d2833e9f923349b3e841c5b627754559509a016a16f494f2bc547031a324 Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.381529 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68c5fdf698-xvbg2"] Nov 21 13:39:35 crc kubenswrapper[4675]: W1121 13:39:35.384631 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4f71fd_5b53_4dd2_9404_fa271dedb59f.slice/crio-ca8e997a956037baf2ee28570477fc61fdd1211bc07cde4aa2334f632e890fcd WatchSource:0}: Error finding container ca8e997a956037baf2ee28570477fc61fdd1211bc07cde4aa2334f632e890fcd: Status 404 returned error can't find the container with id ca8e997a956037baf2ee28570477fc61fdd1211bc07cde4aa2334f632e890fcd Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.913046 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" event={"ID":"c818ea5a-0855-474d-acd1-f61712e082e6","Type":"ContainerStarted","Data":"556347edf1e58fcad3a43d3bc99401d0737c98be969d46ed936d2b3e0812fe63"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.914264 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"b426d2833e9f923349b3e841c5b627754559509a016a16f494f2bc547031a324"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.915290 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" event={"ID":"b079f4aa-2bbc-4de3-9501-be178900f177","Type":"ContainerStarted","Data":"4d4cd389bae6d6a7867f7b15a061814c8fdbca61bfa4534a6fd5cb7ee2ec4ffc"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.916688 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-knk6q" event={"ID":"f743fa20-4f79-4263-959f-e47c664aa064","Type":"ContainerStarted","Data":"0f3082db9ede8f2caa0e36daee363eb3aa372055d4d9f73749681538e2c690a2"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.917720 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"b658a6e3b0adc4f64b6c5d565edf7d64034abca8b37caa46d5cbfe547788c7aa"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.918630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" event={"ID":"3d4f71fd-5b53-4dd2-9404-fa271dedb59f","Type":"ContainerStarted","Data":"ca8e997a956037baf2ee28570477fc61fdd1211bc07cde4aa2334f632e890fcd"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.920448 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" event={"ID":"1743ed2f-3fcf-40a9-90c4-1c794f26092f","Type":"ContainerStarted","Data":"374b15f01fe7323bbf18bd1b352d729e7ed1b8a41ae5fbad610f00227c0ce0c2"} Nov 21 13:39:35 crc kubenswrapper[4675]: I1121 13:39:35.921378 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b67d74f5d-4dn67" event={"ID":"bc475023-1d7f-46cc-a70a-489c5c1b6643","Type":"ContainerStarted","Data":"57ad39481a7daf3ad3bff402c79c7af15cec5d836a6a98fa4f5dc178c3c33eb1"} Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.933211 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" event={"ID":"c818ea5a-0855-474d-acd1-f61712e082e6","Type":"ContainerStarted","Data":"8ef316adae5c6e09d17fd0b6618f8036dfbb48e06877f7d8d5c30ff00fa27480"} Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.933559 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" event={"ID":"c818ea5a-0855-474d-acd1-f61712e082e6","Type":"ContainerStarted","Data":"b2bf6e1761c37adf1047e2ebd4e3943d7afc20e3202487ce5884af73a118f5f9"} Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.935078 4675 generic.go:334] "Generic (PLEG): container finished" podID="f743fa20-4f79-4263-959f-e47c664aa064" containerID="0f3082db9ede8f2caa0e36daee363eb3aa372055d4d9f73749681538e2c690a2" exitCode=0 Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.935164 4675 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-knk6q" event={"ID":"f743fa20-4f79-4263-959f-e47c664aa064","Type":"ContainerDied","Data":"0f3082db9ede8f2caa0e36daee363eb3aa372055d4d9f73749681538e2c690a2"} Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.938594 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b67d74f5d-4dn67" event={"ID":"bc475023-1d7f-46cc-a70a-489c5c1b6643","Type":"ContainerStarted","Data":"0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474"} Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.976806 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-qd66k" podStartSLOduration=3.781826697 podStartE2EDuration="11.976790011s" podCreationTimestamp="2025-11-21 13:39:25 +0000 UTC" firstStartedPulling="2025-11-21 13:39:26.545467495 +0000 UTC m=+443.271882222" lastFinishedPulling="2025-11-21 13:39:34.740430809 +0000 UTC m=+451.466845536" observedRunningTime="2025-11-21 13:39:36.974883531 +0000 UTC m=+453.701298268" watchObservedRunningTime="2025-11-21 13:39:36.976790011 +0000 UTC m=+453.703204738" Nov 21 13:39:36 crc kubenswrapper[4675]: I1121 13:39:36.998344 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b67d74f5d-4dn67" podStartSLOduration=6.998321424 podStartE2EDuration="6.998321424s" podCreationTimestamp="2025-11-21 13:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:39:36.995516351 +0000 UTC m=+453.721931078" watchObservedRunningTime="2025-11-21 13:39:36.998321424 +0000 UTC m=+453.724736161" Nov 21 13:39:37 crc kubenswrapper[4675]: I1121 13:39:37.971986 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-5c7b5" podStartSLOduration=4.637073677 podStartE2EDuration="12.971957273s" podCreationTimestamp="2025-11-21 13:39:25 +0000 UTC" firstStartedPulling="2025-11-21 13:39:26.404603697 +0000 UTC m=+443.131018434" lastFinishedPulling="2025-11-21 13:39:34.739487303 +0000 UTC m=+451.465902030" observedRunningTime="2025-11-21 13:39:37.967991459 +0000 UTC m=+454.694406186" watchObservedRunningTime="2025-11-21 13:39:37.971957273 +0000 UTC m=+454.698372000" Nov 21 13:39:38 crc kubenswrapper[4675]: I1121 13:39:38.951272 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"805df1e5485b92147e1f6049936ff1a8f75f8e99fd3044fb805bae16b45c2d46"} Nov 21 13:39:38 crc kubenswrapper[4675]: I1121 13:39:38.954416 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-knk6q" event={"ID":"f743fa20-4f79-4263-959f-e47c664aa064","Type":"ContainerStarted","Data":"3322d7f9ab3227ad19ffb52b238ed5c3f4bc465f2f12623153e748b4699cdcf1"} Nov 21 13:39:38 crc kubenswrapper[4675]: I1121 13:39:38.954459 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-knk6q" event={"ID":"f743fa20-4f79-4263-959f-e47c664aa064","Type":"ContainerStarted","Data":"c6cc09cdfe453f978429be20cbb8bd57fc38c17d17cd31e533a181d9bdcc8656"} Nov 21 13:39:38 crc kubenswrapper[4675]: I1121 13:39:38.956161 4675 generic.go:334] "Generic (PLEG): container finished" podID="2fee1cb5-612f-4214-83d7-93154c4957cc" 
containerID="211c1401f22ab6a4cd1c57d4babdbf90ecc360618350efc41addb993a82bd265" exitCode=0 Nov 21 13:39:38 crc kubenswrapper[4675]: I1121 13:39:38.956199 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerDied","Data":"211c1401f22ab6a4cd1c57d4babdbf90ecc360618350efc41addb993a82bd265"} Nov 21 13:39:39 crc kubenswrapper[4675]: I1121 13:39:39.024490 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-knk6q" podStartSLOduration=5.4687608690000005 podStartE2EDuration="14.024474465s" podCreationTimestamp="2025-11-21 13:39:25 +0000 UTC" firstStartedPulling="2025-11-21 13:39:26.184053495 +0000 UTC m=+442.910468222" lastFinishedPulling="2025-11-21 13:39:34.739767091 +0000 UTC m=+451.466181818" observedRunningTime="2025-11-21 13:39:39.022356899 +0000 UTC m=+455.748771626" watchObservedRunningTime="2025-11-21 13:39:39.024474465 +0000 UTC m=+455.750889192" Nov 21 13:39:39 crc kubenswrapper[4675]: I1121 13:39:39.967715 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5ced7d3-4bda-4764-ae64-58ffcaf2899a" containerID="805df1e5485b92147e1f6049936ff1a8f75f8e99fd3044fb805bae16b45c2d46" exitCode=0 Nov 21 13:39:39 crc kubenswrapper[4675]: I1121 13:39:39.967873 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerDied","Data":"805df1e5485b92147e1f6049936ff1a8f75f8e99fd3044fb805bae16b45c2d46"} Nov 21 13:39:40 crc kubenswrapper[4675]: I1121 13:39:40.949308 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:40 crc kubenswrapper[4675]: I1121 13:39:40.949395 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:40 crc kubenswrapper[4675]: I1121 13:39:40.955026 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:40 crc kubenswrapper[4675]: I1121 13:39:40.981638 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:39:41 crc kubenswrapper[4675]: I1121 13:39:41.049233 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cz299"] Nov 21 13:39:42 crc kubenswrapper[4675]: I1121 13:39:42.998850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"b58965697f6b47cff7a0d8961a8f27c9675f4dc891ce28c7d6f2485885e74a0f"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:42.999471 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"4772ed5156237e428c303def2406bc86c78ed47a04ca3c7735d7b4ec68bd4af9"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:42.999513 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"e2b5b3c5e9965b2e97c6787f2b68d76127abc647271ec00b00da669591c9d2b4"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 
13:39:43.000375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" event={"ID":"3d4f71fd-5b53-4dd2-9404-fa271dedb59f","Type":"ContainerStarted","Data":"f3db34477c2c6932be4549b2dd2ce932fdde3a1055119c7cf89ce234a8f2079e"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.002277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" event={"ID":"b079f4aa-2bbc-4de3-9501-be178900f177","Type":"ContainerStarted","Data":"4597c23c1f71eb3d5340eabd23464506dd0c5a427fd0b4ce6be6ba5f54ca5721"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.002808 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.006859 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"2964eda8d5ccde987b060e53a8b4423848508e41d2b33be5a616b238900f0876"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.006902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"8fd967d9544b47f9cbe6c0eda35cf1e04bb5696d4c32aced74f84343253af033"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.006912 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"553a24f24f0930274d58731a0aca42e6acccd4d73fdc74d74fe457d4e742fb35"} Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.008058 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.034625 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" podStartSLOduration=5.373565993 podStartE2EDuration="12.034604143s" podCreationTimestamp="2025-11-21 13:39:31 +0000 UTC" firstStartedPulling="2025-11-21 13:39:35.386887214 +0000 UTC m=+452.113301941" lastFinishedPulling="2025-11-21 13:39:42.047925364 +0000 UTC m=+458.774340091" observedRunningTime="2025-11-21 13:39:43.019700764 +0000 UTC m=+459.746115491" watchObservedRunningTime="2025-11-21 13:39:43.034604143 +0000 UTC m=+459.761018870" Nov 21 13:39:43 crc kubenswrapper[4675]: I1121 13:39:43.035009 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-895f7f958-9lngk" podStartSLOduration=5.279027828 podStartE2EDuration="12.035002884s" podCreationTimestamp="2025-11-21 13:39:31 +0000 UTC" firstStartedPulling="2025-11-21 13:39:35.274598454 +0000 UTC m=+452.001013181" lastFinishedPulling="2025-11-21 13:39:42.03057351 +0000 UTC m=+458.756988237" observedRunningTime="2025-11-21 13:39:43.032211621 +0000 UTC m=+459.758626348" watchObservedRunningTime="2025-11-21 13:39:43.035002884 +0000 UTC m=+459.761417621" Nov 21 13:39:44 crc kubenswrapper[4675]: I1121 13:39:44.014581 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"20cf6e74621ae49f30a1c2e66b900bfbe535df3c96f1093e9da7a920a1f20bb9"} Nov 21 
13:39:45 crc kubenswrapper[4675]: I1121 13:39:45.023585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"49947598e18fbd99abe83d4a19cd7fcea9377daece84c78f4c9bc91380b74fde"} Nov 21 13:39:45 crc kubenswrapper[4675]: I1121 13:39:45.025529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"69970195286bfa7b832685b95a69709af6a5837b2a7282f9fcce7498f0f8c17c"} Nov 21 13:39:45 crc kubenswrapper[4675]: I1121 13:39:45.025577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"b88ba6f2d87a8400f97c00c368366a79f5d2992f38d095b88173f6fa80a5de30"} Nov 21 13:39:45 crc kubenswrapper[4675]: I1121 13:39:45.025590 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"3edb85505d6298423581451c59cf0490bda0ff836143a438acc231ffa5c6b099"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.034333 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2fee1cb5-612f-4214-83d7-93154c4957cc","Type":"ContainerStarted","Data":"07143dde4b0eee3cbbb26072be48dd9f33413d525e131c51216610a48558d287"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.037741 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"83406f4e99226a177d15814112f2f89f4f9caa3ceb10d4268385462b6ddf7abd"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.037773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"c448ed1510023008f2d4a970b241a87ec6572ac7b1e1785c8006a3ab4cd9b81c"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.037784 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" event={"ID":"d50dea3b-6a88-4ad4-a6be-0e27adcfd49d","Type":"ContainerStarted","Data":"bcc69e56d835490dcf3e9efe463c139413f8979cdcb72a7408e2c6db858c91d5"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.037932 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.042916 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"08f0105c2c6a712afdf2b9c4e5d891e8efc860c2f49cd09efec22d2c363d0cfc"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.042988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"873d060351491f421f67883c94969dbb021ce1261f187ebe232f2a412b682eb1"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.043006 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"f5ced7d3-4bda-4764-ae64-58ffcaf2899a","Type":"ContainerStarted","Data":"a223c524509fef12f6689036f0c70a0e45025c81c27c4c153fe5c0d7f8d24a40"} Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.068565 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.678990021 podStartE2EDuration="20.068541936s" podCreationTimestamp="2025-11-21 13:39:26 +0000 UTC" firstStartedPulling="2025-11-21 13:39:27.669336941 +0000 UTC m=+444.395751668" lastFinishedPulling="2025-11-21 13:39:45.058888856 +0000 UTC m=+461.785303583" observedRunningTime="2025-11-21 13:39:46.063658179 +0000 UTC m=+462.790072956" watchObservedRunningTime="2025-11-21 13:39:46.068541936 +0000 UTC m=+462.794956663" Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.115341 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.132127936 podStartE2EDuration="14.11532615s" podCreationTimestamp="2025-11-21 13:39:32 +0000 UTC" firstStartedPulling="2025-11-21 13:39:35.345352337 +0000 UTC m=+452.071767064" lastFinishedPulling="2025-11-21 13:39:44.328550551 +0000 UTC m=+461.054965278" observedRunningTime="2025-11-21 13:39:46.107648329 +0000 UTC m=+462.834063056" watchObservedRunningTime="2025-11-21 13:39:46.11532615 +0000 UTC m=+462.841740877" Nov 21 13:39:46 crc kubenswrapper[4675]: I1121 13:39:46.139017 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" podStartSLOduration=9.113953236 podStartE2EDuration="19.13900145s" podCreationTimestamp="2025-11-21 13:39:27 +0000 UTC" firstStartedPulling="2025-11-21 13:39:35.036477331 +0000 UTC m=+451.762892058" lastFinishedPulling="2025-11-21 13:39:45.061525545 +0000 UTC m=+461.787940272" observedRunningTime="2025-11-21 13:39:46.136455783 +0000 UTC m=+462.862870530" watchObservedRunningTime="2025-11-21 13:39:46.13900145 +0000 UTC m=+462.865416177" Nov 21 13:39:47 crc kubenswrapper[4675]: I1121 13:39:47.059211 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5fb587c56b-z7594" Nov 21 13:39:47 crc kubenswrapper[4675]: I1121 13:39:47.675227 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:39:51 crc kubenswrapper[4675]: I1121 13:39:51.413780 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:51 crc kubenswrapper[4675]: I1121 13:39:51.414180 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:39:59 crc kubenswrapper[4675]: I1121 13:39:59.722147 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" podUID="872d6e82-4322-4b06-a8e1-c3f23aea4c45" containerName="registry" containerID="cri-o://0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a" gracePeriod=30 Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.046891 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.122554 4675 generic.go:334] "Generic (PLEG): container finished" podID="872d6e82-4322-4b06-a8e1-c3f23aea4c45" containerID="0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a" exitCode=0 Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.122595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" event={"ID":"872d6e82-4322-4b06-a8e1-c3f23aea4c45","Type":"ContainerDied","Data":"0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a"} Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.122625 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" event={"ID":"872d6e82-4322-4b06-a8e1-c3f23aea4c45","Type":"ContainerDied","Data":"b395f991d10da316dc45a6f1053da3b70290f86388196ea783159bb5f6715418"} Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.122640 4675 scope.go:117] "RemoveContainer" containerID="0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.122656 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nd8c" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.136894 4675 scope.go:117] "RemoveContainer" containerID="0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a" Nov 21 13:40:00 crc kubenswrapper[4675]: E1121 13:40:00.137321 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a\": container with ID starting with 0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a not found: ID does not exist" containerID="0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.137357 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a"} err="failed to get container status \"0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a\": rpc error: code = NotFound desc = could not find container \"0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a\": container with ID starting with 0de2b69d603c6b183b182b4277fc93a97aa05ef11bce861979b7a3bdf3bbdd7a not found: ID does not exist" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187486 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/872d6e82-4322-4b06-a8e1-c3f23aea4c45-installation-pull-secrets\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187528 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-tls\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187626 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-trusted-ca\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187818 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187841 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4fwq\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-kube-api-access-z4fwq\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187883 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-certificates\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187911 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/872d6e82-4322-4b06-a8e1-c3f23aea4c45-ca-trust-extracted\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.187932 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-bound-sa-token\") pod \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\" (UID: \"872d6e82-4322-4b06-a8e1-c3f23aea4c45\") " Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.188855 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.188923 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.194359 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-kube-api-access-z4fwq" (OuterVolumeSpecName: "kube-api-access-z4fwq") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "kube-api-access-z4fwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.194432 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872d6e82-4322-4b06-a8e1-c3f23aea4c45-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.194539 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.198161 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.201279 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.205884 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872d6e82-4322-4b06-a8e1-c3f23aea4c45-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "872d6e82-4322-4b06-a8e1-c3f23aea4c45" (UID: "872d6e82-4322-4b06-a8e1-c3f23aea4c45"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289155 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4fwq\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-kube-api-access-z4fwq\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289201 4675 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289272 4675 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/872d6e82-4322-4b06-a8e1-c3f23aea4c45-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289285 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289297 4675 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/872d6e82-4322-4b06-a8e1-c3f23aea4c45-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289307 4675 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/872d6e82-4322-4b06-a8e1-c3f23aea4c45-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.289319 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872d6e82-4322-4b06-a8e1-c3f23aea4c45-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.452004 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nd8c"] Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.455974 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nd8c"] Nov 21 13:40:00 crc kubenswrapper[4675]: I1121 13:40:00.856470 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872d6e82-4322-4b06-a8e1-c3f23aea4c45" path="/var/lib/kubelet/pods/872d6e82-4322-4b06-a8e1-c3f23aea4c45/volumes" Nov 21 13:40:06 crc kubenswrapper[4675]: I1121 13:40:06.094610 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cz299" podUID="0d4777cf-9799-450d-a46f-5d5bedeaa706" containerName="console" containerID="cri-o://8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1" gracePeriod=15 Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.003685 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cz299_0d4777cf-9799-450d-a46f-5d5bedeaa706/console/0.log" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.003902 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.173513 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cz299_0d4777cf-9799-450d-a46f-5d5bedeaa706/console/0.log" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.173581 4675 generic.go:334] "Generic (PLEG): container finished" podID="0d4777cf-9799-450d-a46f-5d5bedeaa706" containerID="8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1" exitCode=2 Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.173620 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cz299" event={"ID":"0d4777cf-9799-450d-a46f-5d5bedeaa706","Type":"ContainerDied","Data":"8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1"} Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.173656 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cz299" event={"ID":"0d4777cf-9799-450d-a46f-5d5bedeaa706","Type":"ContainerDied","Data":"341df6969631cd76bd1c5e1eb6f8a4c8a0ea567c4a1aa174b0b64b5a8d0889e6"} Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.173681 4675 scope.go:117] "RemoveContainer" containerID="8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.173824 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cz299" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.191883 4675 scope.go:117] "RemoveContainer" containerID="8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1" Nov 21 13:40:07 crc kubenswrapper[4675]: E1121 13:40:07.192523 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1\": container with ID starting with 8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1 not found: ID does not exist" containerID="8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.192576 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1"} err="failed to get container status \"8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1\": rpc error: code = NotFound desc = could not find container \"8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1\": container with ID starting with 8ccdf547d3c6c113c3305af5a174b1aed79f6ed01ef6c42a524275de651beaf1 not found: ID does not exist" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.194174 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-serving-cert\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.194279 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-trusted-ca-bundle\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: 
I1121 13:40:07.194352 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-oauth-config\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.194392 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-service-ca\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.194432 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcv9n\" (UniqueName: \"kubernetes.io/projected/0d4777cf-9799-450d-a46f-5d5bedeaa706-kube-api-access-xcv9n\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.194456 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-config\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.194484 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-oauth-serving-cert\") pod \"0d4777cf-9799-450d-a46f-5d5bedeaa706\" (UID: \"0d4777cf-9799-450d-a46f-5d5bedeaa706\") " Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.195502 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-service-ca" (OuterVolumeSpecName: "service-ca") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.195556 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.195586 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-config" (OuterVolumeSpecName: "console-config") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.195796 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.207751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.207785 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4777cf-9799-450d-a46f-5d5bedeaa706-kube-api-access-xcv9n" (OuterVolumeSpecName: "kube-api-access-xcv9n") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "kube-api-access-xcv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.209031 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0d4777cf-9799-450d-a46f-5d5bedeaa706" (UID: "0d4777cf-9799-450d-a46f-5d5bedeaa706"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296711 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296762 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296775 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcv9n\" (UniqueName: \"kubernetes.io/projected/0d4777cf-9799-450d-a46f-5d5bedeaa706-kube-api-access-xcv9n\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296787 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296799 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296812 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4777cf-9799-450d-a46f-5d5bedeaa706-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.296821 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4777cf-9799-450d-a46f-5d5bedeaa706-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 13:40:07.507232 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cz299"] Nov 21 13:40:07 crc kubenswrapper[4675]: I1121 
13:40:07.508111 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cz299"] Nov 21 13:40:08 crc kubenswrapper[4675]: I1121 13:40:08.855979 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d4777cf-9799-450d-a46f-5d5bedeaa706" path="/var/lib/kubelet/pods/0d4777cf-9799-450d-a46f-5d5bedeaa706/volumes" Nov 21 13:40:11 crc kubenswrapper[4675]: I1121 13:40:11.420395 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:40:11 crc kubenswrapper[4675]: I1121 13:40:11.426251 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68c5fdf698-xvbg2" Nov 21 13:40:32 crc kubenswrapper[4675]: I1121 13:40:32.674996 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:40:32 crc kubenswrapper[4675]: I1121 13:40:32.719476 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:40:33 crc kubenswrapper[4675]: I1121 13:40:33.391585 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.811805 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cf4fc8679-kzt87"] Nov 21 13:41:15 crc kubenswrapper[4675]: E1121 13:41:15.812662 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4777cf-9799-450d-a46f-5d5bedeaa706" containerName="console" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.812678 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4777cf-9799-450d-a46f-5d5bedeaa706" containerName="console" Nov 21 13:41:15 crc kubenswrapper[4675]: E1121 13:41:15.812691 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872d6e82-4322-4b06-a8e1-c3f23aea4c45" containerName="registry" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.812698 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="872d6e82-4322-4b06-a8e1-c3f23aea4c45" containerName="registry" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.812810 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="872d6e82-4322-4b06-a8e1-c3f23aea4c45" containerName="registry" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.812825 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4777cf-9799-450d-a46f-5d5bedeaa706" containerName="console" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.813434 4675 util.go:30] "No sandbox for pod can be found. 
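
Between a SyncLoop DELETE and the later "Cleaned up orphaned pod volumes dir" entry, each "operationExecutor.UnmountVolume started" for the pod should be answered by a matching "Volume detached" line, as it is for both the registry and console pods above. A minimal sketch, along the same lines as the earlier mount tally, that flags any volume left unmatched in an excerpt on stdin; the phase strings are copied from the entries, the rest is illustrative:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        volRe := regexp.MustCompile(`volume \\?"([^"\\]+)\\?"`)
        started := map[string]bool{}
        detached := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            m := volRe.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            switch {
            case strings.Contains(line, "operationExecutor.UnmountVolume started"):
                started[m[1]] = true
            case strings.Contains(line, "Volume detached for volume"):
                detached[m[1]] = true
            }
        }
        for vol := range started {
            if !detached[vol] {
                fmt.Println("unmount started but no detach logged for:", vol)
            }
        }
    }
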
Need to start a new one" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.816451 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cf4fc8679-kzt87"] Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-trusted-ca-bundle\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924510 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-oauth-config\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-oauth-serving-cert\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-config\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924728 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-service-ca\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924778 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-serving-cert\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:15 crc kubenswrapper[4675]: I1121 13:41:15.924994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xxw\" (UniqueName: \"kubernetes.io/projected/d155dc03-6e0a-4668-85e4-26b01f5df8c8-kube-api-access-t5xxw\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.026263 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xxw\" (UniqueName: \"kubernetes.io/projected/d155dc03-6e0a-4668-85e4-26b01f5df8c8-kube-api-access-t5xxw\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 
crc kubenswrapper[4675]: I1121 13:41:16.026326 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-trusted-ca-bundle\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.026351 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-oauth-config\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.026367 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-oauth-serving-cert\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.026562 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-config\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.026581 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-service-ca\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.026597 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-serving-cert\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.027904 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-oauth-serving-cert\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.028492 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-service-ca\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.028740 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-trusted-ca-bundle\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.029119 
4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-config\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.033992 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-serving-cert\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.034229 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-oauth-config\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.052976 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xxw\" (UniqueName: \"kubernetes.io/projected/d155dc03-6e0a-4668-85e4-26b01f5df8c8-kube-api-access-t5xxw\") pod \"console-5cf4fc8679-kzt87\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.131850 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.136522 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.136563 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.347312 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cf4fc8679-kzt87"] Nov 21 13:41:16 crc kubenswrapper[4675]: I1121 13:41:16.619705 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cf4fc8679-kzt87" event={"ID":"d155dc03-6e0a-4668-85e4-26b01f5df8c8","Type":"ContainerStarted","Data":"3fa089206e5e791627465bb44b5c4a886512c183b590e14f4f4adf9760ea9ec0"} Nov 21 13:41:17 crc kubenswrapper[4675]: I1121 13:41:17.626608 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cf4fc8679-kzt87" event={"ID":"d155dc03-6e0a-4668-85e4-26b01f5df8c8","Type":"ContainerStarted","Data":"334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4"} Nov 21 13:41:17 crc kubenswrapper[4675]: I1121 13:41:17.646630 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cf4fc8679-kzt87" podStartSLOduration=2.64660878 podStartE2EDuration="2.64660878s" podCreationTimestamp="2025-11-21 13:41:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:41:17.643000296 +0000 UTC m=+554.369415023" watchObservedRunningTime="2025-11-21 13:41:17.64660878 +0000 UTC m=+554.373023527" Nov 21 13:41:26 crc kubenswrapper[4675]: I1121 13:41:26.133374 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:26 crc kubenswrapper[4675]: I1121 13:41:26.133806 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:26 crc kubenswrapper[4675]: I1121 13:41:26.139908 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:26 crc kubenswrapper[4675]: I1121 13:41:26.687139 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:41:26 crc kubenswrapper[4675]: I1121 13:41:26.730039 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b67d74f5d-4dn67"] Nov 21 13:41:46 crc kubenswrapper[4675]: I1121 13:41:46.135926 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:41:46 crc kubenswrapper[4675]: I1121 13:41:46.136657 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:41:51 crc kubenswrapper[4675]: I1121 13:41:51.777268 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b67d74f5d-4dn67" podUID="bc475023-1d7f-46cc-a70a-489c5c1b6643" containerName="console" containerID="cri-o://0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474" gracePeriod=15 Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.199884 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b67d74f5d-4dn67_bc475023-1d7f-46cc-a70a-489c5c1b6643/console/0.log" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.200165 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228654 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-oauth-config\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228746 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-serving-cert\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228781 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-trusted-ca-bundle\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-config\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228861 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-oauth-serving-cert\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228887 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-service-ca\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.228907 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhbd\" (UniqueName: \"kubernetes.io/projected/bc475023-1d7f-46cc-a70a-489c5c1b6643-kube-api-access-hfhbd\") pod \"bc475023-1d7f-46cc-a70a-489c5c1b6643\" (UID: \"bc475023-1d7f-46cc-a70a-489c5c1b6643\") " Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.229618 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.229627 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-config" (OuterVolumeSpecName: "console-config") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.229719 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.229884 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-service-ca" (OuterVolumeSpecName: "service-ca") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.234253 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc475023-1d7f-46cc-a70a-489c5c1b6643-kube-api-access-hfhbd" (OuterVolumeSpecName: "kube-api-access-hfhbd") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "kube-api-access-hfhbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.234271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.234338 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bc475023-1d7f-46cc-a70a-489c5c1b6643" (UID: "bc475023-1d7f-46cc-a70a-489c5c1b6643"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329695 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329737 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329747 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329790 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329801 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329810 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc475023-1d7f-46cc-a70a-489c5c1b6643-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.329818 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhbd\" (UniqueName: \"kubernetes.io/projected/bc475023-1d7f-46cc-a70a-489c5c1b6643-kube-api-access-hfhbd\") on node \"crc\" DevicePath \"\"" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.828565 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b67d74f5d-4dn67_bc475023-1d7f-46cc-a70a-489c5c1b6643/console/0.log" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.828614 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc475023-1d7f-46cc-a70a-489c5c1b6643" containerID="0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474" exitCode=2 Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.828642 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b67d74f5d-4dn67" event={"ID":"bc475023-1d7f-46cc-a70a-489c5c1b6643","Type":"ContainerDied","Data":"0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474"} Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.828668 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b67d74f5d-4dn67" event={"ID":"bc475023-1d7f-46cc-a70a-489c5c1b6643","Type":"ContainerDied","Data":"57ad39481a7daf3ad3bff402c79c7af15cec5d836a6a98fa4f5dc178c3c33eb1"} Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.828700 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b67d74f5d-4dn67" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.828706 4675 scope.go:117] "RemoveContainer" containerID="0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.857526 4675 scope.go:117] "RemoveContainer" containerID="0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474" Nov 21 13:41:52 crc kubenswrapper[4675]: E1121 13:41:52.857938 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474\": container with ID starting with 0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474 not found: ID does not exist" containerID="0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.857989 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474"} err="failed to get container status \"0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474\": rpc error: code = NotFound desc = could not find container \"0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474\": container with ID starting with 0dcf0122533f4fa005057cf08479c4f324116b347fb78bedc1827e1ff93bb474 not found: ID does not exist" Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.859808 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b67d74f5d-4dn67"] Nov 21 13:41:52 crc kubenswrapper[4675]: I1121 13:41:52.863759 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b67d74f5d-4dn67"] Nov 21 13:41:54 crc kubenswrapper[4675]: I1121 13:41:54.855536 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc475023-1d7f-46cc-a70a-489c5c1b6643" path="/var/lib/kubelet/pods/bc475023-1d7f-46cc-a70a-489c5c1b6643/volumes" Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.136676 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.137317 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.137378 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.138420 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2db3d60559e3e1b30d576e8b6d70d42aa99aac71aa518a3a570555f006efdc7"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.138494 4675 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://e2db3d60559e3e1b30d576e8b6d70d42aa99aac71aa518a3a570555f006efdc7" gracePeriod=600 Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.974441 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="e2db3d60559e3e1b30d576e8b6d70d42aa99aac71aa518a3a570555f006efdc7" exitCode=0 Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.974539 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"e2db3d60559e3e1b30d576e8b6d70d42aa99aac71aa518a3a570555f006efdc7"} Nov 21 13:42:16 crc kubenswrapper[4675]: I1121 13:42:16.974701 4675 scope.go:117] "RemoveContainer" containerID="856f537a5bd9ff3fb685a8ef1888397fd35d05ca4e4e1461b5e1c59414d0ee63" Nov 21 13:42:17 crc kubenswrapper[4675]: I1121 13:42:17.981762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"ebf6c1f49ce87c01f637a7eb4718589a49885f8f4445c9b07de3609e62a4334b"} Nov 21 13:44:46 crc kubenswrapper[4675]: I1121 13:44:46.136433 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:44:46 crc kubenswrapper[4675]: I1121 13:44:46.137081 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.139085 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4"] Nov 21 13:45:00 crc kubenswrapper[4675]: E1121 13:45:00.139891 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc475023-1d7f-46cc-a70a-489c5c1b6643" containerName="console" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.139908 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc475023-1d7f-46cc-a70a-489c5c1b6643" containerName="console" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.140110 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc475023-1d7f-46cc-a70a-489c5c1b6643" containerName="console" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.140565 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.143376 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.143500 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.195660 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4"] Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.221175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67814e68-91a8-44c3-801d-77bfa9ffc9b0-config-volume\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.221218 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67814e68-91a8-44c3-801d-77bfa9ffc9b0-secret-volume\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.221252 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jz5n\" (UniqueName: \"kubernetes.io/projected/67814e68-91a8-44c3-801d-77bfa9ffc9b0-kube-api-access-8jz5n\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.322961 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67814e68-91a8-44c3-801d-77bfa9ffc9b0-config-volume\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.323026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67814e68-91a8-44c3-801d-77bfa9ffc9b0-secret-volume\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.323103 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jz5n\" (UniqueName: \"kubernetes.io/projected/67814e68-91a8-44c3-801d-77bfa9ffc9b0-kube-api-access-8jz5n\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.324232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67814e68-91a8-44c3-801d-77bfa9ffc9b0-config-volume\") pod 
\"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.330166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67814e68-91a8-44c3-801d-77bfa9ffc9b0-secret-volume\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.344320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jz5n\" (UniqueName: \"kubernetes.io/projected/67814e68-91a8-44c3-801d-77bfa9ffc9b0-kube-api-access-8jz5n\") pod \"collect-profiles-29395545-xqzt4\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.505863 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.712879 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4"] Nov 21 13:45:00 crc kubenswrapper[4675]: I1121 13:45:00.997506 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" event={"ID":"67814e68-91a8-44c3-801d-77bfa9ffc9b0","Type":"ContainerStarted","Data":"9e767b12647e8bc1675ce7f1e1d5eea570ea94063fbb8935453f06965d970936"} Nov 21 13:45:02 crc kubenswrapper[4675]: I1121 13:45:02.005717 4675 generic.go:334] "Generic (PLEG): container finished" podID="67814e68-91a8-44c3-801d-77bfa9ffc9b0" containerID="4d45fc3cc395872483f5b3858554947237c0f85e2268c0cb2c79da7473875041" exitCode=0 Nov 21 13:45:02 crc kubenswrapper[4675]: I1121 13:45:02.005761 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" event={"ID":"67814e68-91a8-44c3-801d-77bfa9ffc9b0","Type":"ContainerDied","Data":"4d45fc3cc395872483f5b3858554947237c0f85e2268c0cb2c79da7473875041"} Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.204003 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.263292 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67814e68-91a8-44c3-801d-77bfa9ffc9b0-config-volume\") pod \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.263355 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jz5n\" (UniqueName: \"kubernetes.io/projected/67814e68-91a8-44c3-801d-77bfa9ffc9b0-kube-api-access-8jz5n\") pod \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.263443 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67814e68-91a8-44c3-801d-77bfa9ffc9b0-secret-volume\") pod \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\" (UID: \"67814e68-91a8-44c3-801d-77bfa9ffc9b0\") " Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.263793 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67814e68-91a8-44c3-801d-77bfa9ffc9b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "67814e68-91a8-44c3-801d-77bfa9ffc9b0" (UID: "67814e68-91a8-44c3-801d-77bfa9ffc9b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.267834 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67814e68-91a8-44c3-801d-77bfa9ffc9b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67814e68-91a8-44c3-801d-77bfa9ffc9b0" (UID: "67814e68-91a8-44c3-801d-77bfa9ffc9b0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.268702 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67814e68-91a8-44c3-801d-77bfa9ffc9b0-kube-api-access-8jz5n" (OuterVolumeSpecName: "kube-api-access-8jz5n") pod "67814e68-91a8-44c3-801d-77bfa9ffc9b0" (UID: "67814e68-91a8-44c3-801d-77bfa9ffc9b0"). InnerVolumeSpecName "kube-api-access-8jz5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.365489 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67814e68-91a8-44c3-801d-77bfa9ffc9b0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.365539 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67814e68-91a8-44c3-801d-77bfa9ffc9b0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:03 crc kubenswrapper[4675]: I1121 13:45:03.365566 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jz5n\" (UniqueName: \"kubernetes.io/projected/67814e68-91a8-44c3-801d-77bfa9ffc9b0-kube-api-access-8jz5n\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:04 crc kubenswrapper[4675]: I1121 13:45:04.018271 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" event={"ID":"67814e68-91a8-44c3-801d-77bfa9ffc9b0","Type":"ContainerDied","Data":"9e767b12647e8bc1675ce7f1e1d5eea570ea94063fbb8935453f06965d970936"} Nov 21 13:45:04 crc kubenswrapper[4675]: I1121 13:45:04.018543 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e767b12647e8bc1675ce7f1e1d5eea570ea94063fbb8935453f06965d970936" Nov 21 13:45:04 crc kubenswrapper[4675]: I1121 13:45:04.018457 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.387284 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5872"] Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.389407 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" podUID="d16b4be5-ea4f-4d90-b2be-3e9582858283" containerName="controller-manager" containerID="cri-o://65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00" gracePeriod=30 Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.470366 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"] Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.470622 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" podUID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" containerName="route-controller-manager" containerID="cri-o://02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562" gracePeriod=30 Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.890486 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.899859 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959581 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d16b4be5-ea4f-4d90-b2be-3e9582858283-serving-cert\") pod \"d16b4be5-ea4f-4d90-b2be-3e9582858283\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959642 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-config\") pod \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959678 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-proxy-ca-bundles\") pod \"d16b4be5-ea4f-4d90-b2be-3e9582858283\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959714 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-client-ca\") pod \"d16b4be5-ea4f-4d90-b2be-3e9582858283\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959746 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-serving-cert\") pod \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959779 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlmv\" (UniqueName: \"kubernetes.io/projected/d16b4be5-ea4f-4d90-b2be-3e9582858283-kube-api-access-lwlmv\") pod \"d16b4be5-ea4f-4d90-b2be-3e9582858283\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-client-ca\") pod \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959846 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9864d\" (UniqueName: \"kubernetes.io/projected/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-kube-api-access-9864d\") pod \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\" (UID: \"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.959885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-config\") pod \"d16b4be5-ea4f-4d90-b2be-3e9582858283\" (UID: \"d16b4be5-ea4f-4d90-b2be-3e9582858283\") " Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.960865 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-client-ca" (OuterVolumeSpecName: "client-ca") pod "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" 
(UID: "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.961010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-config" (OuterVolumeSpecName: "config") pod "d16b4be5-ea4f-4d90-b2be-3e9582858283" (UID: "d16b4be5-ea4f-4d90-b2be-3e9582858283"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.960976 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-client-ca" (OuterVolumeSpecName: "client-ca") pod "d16b4be5-ea4f-4d90-b2be-3e9582858283" (UID: "d16b4be5-ea4f-4d90-b2be-3e9582858283"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.961510 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-config" (OuterVolumeSpecName: "config") pod "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" (UID: "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.961883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d16b4be5-ea4f-4d90-b2be-3e9582858283" (UID: "d16b4be5-ea4f-4d90-b2be-3e9582858283"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.966946 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-kube-api-access-9864d" (OuterVolumeSpecName: "kube-api-access-9864d") pod "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" (UID: "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295"). InnerVolumeSpecName "kube-api-access-9864d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.967042 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" (UID: "f9e36cf0-9784-40cc-bbe5-19cfbc5e8295"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.967599 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16b4be5-ea4f-4d90-b2be-3e9582858283-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d16b4be5-ea4f-4d90-b2be-3e9582858283" (UID: "d16b4be5-ea4f-4d90-b2be-3e9582858283"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:45:09 crc kubenswrapper[4675]: I1121 13:45:09.971652 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16b4be5-ea4f-4d90-b2be-3e9582858283-kube-api-access-lwlmv" (OuterVolumeSpecName: "kube-api-access-lwlmv") pod "d16b4be5-ea4f-4d90-b2be-3e9582858283" (UID: "d16b4be5-ea4f-4d90-b2be-3e9582858283"). 
InnerVolumeSpecName "kube-api-access-lwlmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.050254 4675 generic.go:334] "Generic (PLEG): container finished" podID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" containerID="02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562" exitCode=0 Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.050317 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.050349 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" event={"ID":"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295","Type":"ContainerDied","Data":"02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562"} Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.050383 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c" event={"ID":"f9e36cf0-9784-40cc-bbe5-19cfbc5e8295","Type":"ContainerDied","Data":"eae68c881ac28908c738a3bd0b5ff106e1622c9b4df98958cd72fb39e2ebc11a"} Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.050403 4675 scope.go:117] "RemoveContainer" containerID="02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.053434 4675 generic.go:334] "Generic (PLEG): container finished" podID="d16b4be5-ea4f-4d90-b2be-3e9582858283" containerID="65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00" exitCode=0 Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.053476 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" event={"ID":"d16b4be5-ea4f-4d90-b2be-3e9582858283","Type":"ContainerDied","Data":"65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00"} Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.053506 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" event={"ID":"d16b4be5-ea4f-4d90-b2be-3e9582858283","Type":"ContainerDied","Data":"0633c6dddd906c57e00713a2537869d0297e6bf380a4c9fbcb50b6bf374b2a6f"} Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.053557 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5872" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061886 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d16b4be5-ea4f-4d90-b2be-3e9582858283-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061931 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061943 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061958 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061970 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061981 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwlmv\" (UniqueName: \"kubernetes.io/projected/d16b4be5-ea4f-4d90-b2be-3e9582858283-kube-api-access-lwlmv\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.061993 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.062003 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9864d\" (UniqueName: \"kubernetes.io/projected/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295-kube-api-access-9864d\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.062014 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d16b4be5-ea4f-4d90-b2be-3e9582858283-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.068429 4675 scope.go:117] "RemoveContainer" containerID="02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562" Nov 21 13:45:10 crc kubenswrapper[4675]: E1121 13:45:10.068822 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562\": container with ID starting with 02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562 not found: ID does not exist" containerID="02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.068863 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562"} err="failed to get container status \"02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562\": rpc error: code = NotFound desc = could not find container 
\"02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562\": container with ID starting with 02caa0c41294c79b196d7b97ef199b3928bbc153ef83857ad5d1ba093f479562 not found: ID does not exist" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.068889 4675 scope.go:117] "RemoveContainer" containerID="65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.086457 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5872"] Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.087948 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5872"] Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.089498 4675 scope.go:117] "RemoveContainer" containerID="65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00" Nov 21 13:45:10 crc kubenswrapper[4675]: E1121 13:45:10.092693 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00\": container with ID starting with 65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00 not found: ID does not exist" containerID="65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.092742 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00"} err="failed to get container status \"65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00\": rpc error: code = NotFound desc = could not find container \"65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00\": container with ID starting with 65efded9cd49d4a82730dd4387e26efc3f3d8f452eb52b9357bde651ecbfec00 not found: ID does not exist" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.097279 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"] Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.100502 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kwl5c"] Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.855447 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16b4be5-ea4f-4d90-b2be-3e9582858283" path="/var/lib/kubelet/pods/d16b4be5-ea4f-4d90-b2be-3e9582858283/volumes" Nov 21 13:45:10 crc kubenswrapper[4675]: I1121 13:45:10.856010 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" path="/var/lib/kubelet/pods/f9e36cf0-9784-40cc-bbe5-19cfbc5e8295/volumes" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.290916 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d556468fc-z8282"] Nov 21 13:45:11 crc kubenswrapper[4675]: E1121 13:45:11.291258 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" containerName="route-controller-manager" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291274 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" containerName="route-controller-manager" Nov 21 13:45:11 crc kubenswrapper[4675]: E1121 
13:45:11.291293 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67814e68-91a8-44c3-801d-77bfa9ffc9b0" containerName="collect-profiles" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291299 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="67814e68-91a8-44c3-801d-77bfa9ffc9b0" containerName="collect-profiles" Nov 21 13:45:11 crc kubenswrapper[4675]: E1121 13:45:11.291311 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16b4be5-ea4f-4d90-b2be-3e9582858283" containerName="controller-manager" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291317 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16b4be5-ea4f-4d90-b2be-3e9582858283" containerName="controller-manager" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291411 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="67814e68-91a8-44c3-801d-77bfa9ffc9b0" containerName="collect-profiles" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291419 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16b4be5-ea4f-4d90-b2be-3e9582858283" containerName="controller-manager" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291427 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e36cf0-9784-40cc-bbe5-19cfbc5e8295" containerName="route-controller-manager" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.291921 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.293961 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.293986 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.294284 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.294420 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.294447 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.294530 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.295164 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77996fd7db-wb44g"] Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.296057 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.300697 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.301185 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.301354 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.301510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.301675 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.301810 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77996fd7db-wb44g"] Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.306649 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d556468fc-z8282"] Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.307018 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.312563 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382381 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-config\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gww\" (UniqueName: \"kubernetes.io/projected/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-kube-api-access-s2gww\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382567 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-client-ca\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382622 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-serving-cert\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 
13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382703 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-client-ca\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382798 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7zz\" (UniqueName: \"kubernetes.io/projected/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-kube-api-access-sq7zz\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382834 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-config\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.382993 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-serving-cert\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.383025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-proxy-ca-bundles\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484194 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-client-ca\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-serving-cert\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484283 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-client-ca\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 
13:45:11.484323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7zz\" (UniqueName: \"kubernetes.io/projected/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-kube-api-access-sq7zz\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-config\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484420 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-serving-cert\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-proxy-ca-bundles\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484477 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-config\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.484499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2gww\" (UniqueName: \"kubernetes.io/projected/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-kube-api-access-s2gww\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.485416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-client-ca\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.485763 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-client-ca\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.485995 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-proxy-ca-bundles\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.486120 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-config\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.486241 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-config\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.490399 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-serving-cert\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.499468 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-serving-cert\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.505410 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gww\" (UniqueName: \"kubernetes.io/projected/97a2d7a8-9250-4f4b-8fa7-ae6f117e0460-kube-api-access-s2gww\") pod \"controller-manager-77996fd7db-wb44g\" (UID: \"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460\") " pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.508600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7zz\" (UniqueName: \"kubernetes.io/projected/08ad4224-04ad-4cd0-96bc-e4fc51ecae4c-kube-api-access-sq7zz\") pod \"route-controller-manager-d556468fc-z8282\" (UID: \"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c\") " pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.622144 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.640521 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.831877 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d556468fc-z8282"] Nov 21 13:45:11 crc kubenswrapper[4675]: W1121 13:45:11.839442 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08ad4224_04ad_4cd0_96bc_e4fc51ecae4c.slice/crio-fec9c9a17f1b0f9f800a30a087d519a3a6cbc594aa4b6556b74dc3235ecb4292 WatchSource:0}: Error finding container fec9c9a17f1b0f9f800a30a087d519a3a6cbc594aa4b6556b74dc3235ecb4292: Status 404 returned error can't find the container with id fec9c9a17f1b0f9f800a30a087d519a3a6cbc594aa4b6556b74dc3235ecb4292 Nov 21 13:45:11 crc kubenswrapper[4675]: I1121 13:45:11.892383 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77996fd7db-wb44g"] Nov 21 13:45:11 crc kubenswrapper[4675]: W1121 13:45:11.897011 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a2d7a8_9250_4f4b_8fa7_ae6f117e0460.slice/crio-ad44f340f5d6b6b3eab5b116b305ec754a8978914a229e325632230f12436314 WatchSource:0}: Error finding container ad44f340f5d6b6b3eab5b116b305ec754a8978914a229e325632230f12436314: Status 404 returned error can't find the container with id ad44f340f5d6b6b3eab5b116b305ec754a8978914a229e325632230f12436314 Nov 21 13:45:12 crc kubenswrapper[4675]: I1121 13:45:12.066774 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" event={"ID":"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c","Type":"ContainerStarted","Data":"fec9c9a17f1b0f9f800a30a087d519a3a6cbc594aa4b6556b74dc3235ecb4292"} Nov 21 13:45:12 crc kubenswrapper[4675]: I1121 13:45:12.067833 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" event={"ID":"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460","Type":"ContainerStarted","Data":"ad44f340f5d6b6b3eab5b116b305ec754a8978914a229e325632230f12436314"} Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.076604 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" event={"ID":"97a2d7a8-9250-4f4b-8fa7-ae6f117e0460","Type":"ContainerStarted","Data":"e61e9c2737094b5309836808497cad10267aee2c52c46c31ffaf69b69cf9fab6"} Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.076944 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.083486 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.083631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" event={"ID":"08ad4224-04ad-4cd0-96bc-e4fc51ecae4c","Type":"ContainerStarted","Data":"7bf60dfcb1108a81c89a38cbdf0c968a76f85de55176519b39b98f9eeb9760be"} Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.084734 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.090030 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.098075 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77996fd7db-wb44g" podStartSLOduration=4.098027477 podStartE2EDuration="4.098027477s" podCreationTimestamp="2025-11-21 13:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:45:13.095484832 +0000 UTC m=+789.821899579" watchObservedRunningTime="2025-11-21 13:45:13.098027477 +0000 UTC m=+789.824442204" Nov 21 13:45:13 crc kubenswrapper[4675]: I1121 13:45:13.116973 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d556468fc-z8282" podStartSLOduration=4.116951132 podStartE2EDuration="4.116951132s" podCreationTimestamp="2025-11-21 13:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:45:13.111281067 +0000 UTC m=+789.837695814" watchObservedRunningTime="2025-11-21 13:45:13.116951132 +0000 UTC m=+789.843365859" Nov 21 13:45:16 crc kubenswrapper[4675]: I1121 13:45:16.136284 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:45:16 crc kubenswrapper[4675]: I1121 13:45:16.136953 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:45:18 crc kubenswrapper[4675]: I1121 13:45:18.291708 4675 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 13:45:46 crc kubenswrapper[4675]: I1121 13:45:46.136316 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:45:46 crc kubenswrapper[4675]: I1121 13:45:46.136886 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:45:46 crc kubenswrapper[4675]: I1121 13:45:46.136936 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:45:46 crc kubenswrapper[4675]: I1121 13:45:46.137551 4675 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebf6c1f49ce87c01f637a7eb4718589a49885f8f4445c9b07de3609e62a4334b"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:45:46 crc kubenswrapper[4675]: I1121 13:45:46.137612 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://ebf6c1f49ce87c01f637a7eb4718589a49885f8f4445c9b07de3609e62a4334b" gracePeriod=600 Nov 21 13:45:47 crc kubenswrapper[4675]: I1121 13:45:47.295011 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="ebf6c1f49ce87c01f637a7eb4718589a49885f8f4445c9b07de3609e62a4334b" exitCode=0 Nov 21 13:45:47 crc kubenswrapper[4675]: I1121 13:45:47.295093 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"ebf6c1f49ce87c01f637a7eb4718589a49885f8f4445c9b07de3609e62a4334b"} Nov 21 13:45:47 crc kubenswrapper[4675]: I1121 13:45:47.295770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"e4478a9785c2c0cd8603759bbdd163dd836f7c97363478e7200b2c21e3d3682a"} Nov 21 13:45:47 crc kubenswrapper[4675]: I1121 13:45:47.295799 4675 scope.go:117] "RemoveContainer" containerID="e2db3d60559e3e1b30d576e8b6d70d42aa99aac71aa518a3a570555f006efdc7" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.097221 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn"] Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.099058 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.105332 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.108775 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn"] Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.186953 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.187347 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpzmz\" (UniqueName: \"kubernetes.io/projected/4268fd55-2e1a-4ce3-b168-61ae292f22b9-kube-api-access-xpzmz\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.187446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.288893 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.289033 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.289103 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpzmz\" (UniqueName: \"kubernetes.io/projected/4268fd55-2e1a-4ce3-b168-61ae292f22b9-kube-api-access-xpzmz\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.289445 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.289655 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.318024 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpzmz\" (UniqueName: \"kubernetes.io/projected/4268fd55-2e1a-4ce3-b168-61ae292f22b9-kube-api-access-xpzmz\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.428197 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:21 crc kubenswrapper[4675]: I1121 13:46:21.875140 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn"] Nov 21 13:46:22 crc kubenswrapper[4675]: I1121 13:46:22.506060 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerStarted","Data":"4e97ebce10dfd43d2a0ca799ee3924cab314980789c5d5d8b79a41c8b0f343a7"} Nov 21 13:46:22 crc kubenswrapper[4675]: I1121 13:46:22.506408 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerStarted","Data":"09d6e94027b425aec55dc974ba1d6166c31c1bd24414df91552924148ad04117"} Nov 21 13:46:23 crc kubenswrapper[4675]: I1121 13:46:23.512154 4675 generic.go:334] "Generic (PLEG): container finished" podID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerID="4e97ebce10dfd43d2a0ca799ee3924cab314980789c5d5d8b79a41c8b0f343a7" exitCode=0 Nov 21 13:46:23 crc kubenswrapper[4675]: I1121 13:46:23.512239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerDied","Data":"4e97ebce10dfd43d2a0ca799ee3924cab314980789c5d5d8b79a41c8b0f343a7"} Nov 21 13:46:23 crc kubenswrapper[4675]: I1121 13:46:23.513738 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.463708 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rwc65"] Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.465238 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.479809 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwc65"] Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.542644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shzxt\" (UniqueName: \"kubernetes.io/projected/d6ebe08f-b2ee-4612-bc20-39e31a66631e-kube-api-access-shzxt\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.542744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-catalog-content\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.542821 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-utilities\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.644117 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-catalog-content\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.644367 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-utilities\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.644503 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shzxt\" (UniqueName: \"kubernetes.io/projected/d6ebe08f-b2ee-4612-bc20-39e31a66631e-kube-api-access-shzxt\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.644823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-utilities\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.644840 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-catalog-content\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.667302 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-shzxt\" (UniqueName: \"kubernetes.io/projected/d6ebe08f-b2ee-4612-bc20-39e31a66631e-kube-api-access-shzxt\") pod \"redhat-marketplace-rwc65\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.671782 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t978g"] Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.673180 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.684535 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t978g"] Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.745319 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-catalog-content\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.745370 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7rhl\" (UniqueName: \"kubernetes.io/projected/406d0828-fd52-4538-aa6e-f60e4d58f0c0-kube-api-access-h7rhl\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.745395 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-utilities\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.795351 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.846716 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-catalog-content\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.846783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7rhl\" (UniqueName: \"kubernetes.io/projected/406d0828-fd52-4538-aa6e-f60e4d58f0c0-kube-api-access-h7rhl\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.846818 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-utilities\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.847308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-utilities\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.848573 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-catalog-content\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:24 crc kubenswrapper[4675]: I1121 13:46:24.872230 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7rhl\" (UniqueName: \"kubernetes.io/projected/406d0828-fd52-4538-aa6e-f60e4d58f0c0-kube-api-access-h7rhl\") pod \"redhat-operators-t978g\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:25 crc kubenswrapper[4675]: I1121 13:46:25.002594 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:25 crc kubenswrapper[4675]: I1121 13:46:25.213050 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t978g"] Nov 21 13:46:25 crc kubenswrapper[4675]: I1121 13:46:25.262657 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwc65"] Nov 21 13:46:25 crc kubenswrapper[4675]: W1121 13:46:25.897271 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ebe08f_b2ee_4612_bc20_39e31a66631e.slice/crio-0c9185da6e67f7244ca5d942d919dbbffbf787cd8cf0f3c48ccf7542f3ab996f WatchSource:0}: Error finding container 0c9185da6e67f7244ca5d942d919dbbffbf787cd8cf0f3c48ccf7542f3ab996f: Status 404 returned error can't find the container with id 0c9185da6e67f7244ca5d942d919dbbffbf787cd8cf0f3c48ccf7542f3ab996f Nov 21 13:46:26 crc kubenswrapper[4675]: I1121 13:46:26.543328 4675 generic.go:334] "Generic (PLEG): container finished" podID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerID="dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8" exitCode=0 Nov 21 13:46:26 crc kubenswrapper[4675]: I1121 13:46:26.543432 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerDied","Data":"dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8"} Nov 21 13:46:26 crc kubenswrapper[4675]: I1121 13:46:26.543591 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerStarted","Data":"0c9185da6e67f7244ca5d942d919dbbffbf787cd8cf0f3c48ccf7542f3ab996f"} Nov 21 13:46:26 crc kubenswrapper[4675]: I1121 13:46:26.551436 4675 generic.go:334] "Generic (PLEG): container finished" podID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerID="bb819a355f379d770f1af38a4b89444f86e0cdbb7b4bae0f4e7ba84f934bf7f3" exitCode=0 Nov 21 13:46:26 crc kubenswrapper[4675]: I1121 13:46:26.551485 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerDied","Data":"bb819a355f379d770f1af38a4b89444f86e0cdbb7b4bae0f4e7ba84f934bf7f3"} Nov 21 13:46:26 crc kubenswrapper[4675]: I1121 13:46:26.551513 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerStarted","Data":"297d068bee7cf5fb8f6c03596fd9f0f32e9d68669d8712827fd78e47a9116781"} Nov 21 13:46:28 crc kubenswrapper[4675]: I1121 13:46:28.567940 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerStarted","Data":"c73506d26649c0b482abdabd13ee6e453527c3f9c4be2ce563feb99597e1772d"} Nov 21 13:46:29 crc kubenswrapper[4675]: I1121 13:46:29.577112 4675 generic.go:334] "Generic (PLEG): container finished" podID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerID="c73506d26649c0b482abdabd13ee6e453527c3f9c4be2ce563feb99597e1772d" exitCode=0 Nov 21 13:46:29 crc kubenswrapper[4675]: I1121 13:46:29.577154 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerDied","Data":"c73506d26649c0b482abdabd13ee6e453527c3f9c4be2ce563feb99597e1772d"} Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.846978 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28jn"] Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848028 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-controller" containerID="cri-o://cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848081 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="sbdb" containerID="cri-o://5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848153 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848220 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-node" containerID="cri-o://8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848201 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="nbdb" containerID="cri-o://73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848299 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-acl-logging" containerID="cri-o://4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.848337 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="northd" containerID="cri-o://220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7" gracePeriod=30 Nov 21 13:46:31 crc kubenswrapper[4675]: I1121 13:46:31.901378 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" containerID="cri-o://c5f661d8d67aa7543bd09fe3d0b66402ebe6bac1a49d1de091718eed7ff1ace7" gracePeriod=30 Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.606228 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" 
event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerStarted","Data":"9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd"} Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.608762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerStarted","Data":"8f8b4d31744a91964583d3e6fe960dc2b7c68a5696763e3105d56cb80b53f0fc"} Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.612154 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/3.log" Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.615379 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-acl-logging/0.log" Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.617569 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db" exitCode=143 Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.617667 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db"} Nov 21 13:46:32 crc kubenswrapper[4675]: I1121 13:46:32.621599 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerStarted","Data":"2a428780348f29797a3551814eca716921e45c8560b1ee7f24e1bb32dc82ce45"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.629617 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovnkube-controller/3.log" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.632489 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-acl-logging/0.log" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633093 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-controller/0.log" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633458 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="c5f661d8d67aa7543bd09fe3d0b66402ebe6bac1a49d1de091718eed7ff1ace7" exitCode=0 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633487 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc" exitCode=0 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633498 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4" exitCode=0 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633508 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" 
containerID="220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7" exitCode=0 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633518 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87" exitCode=143 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633534 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"c5f661d8d67aa7543bd09fe3d0b66402ebe6bac1a49d1de091718eed7ff1ace7"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633603 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633615 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.633645 4675 scope.go:117] "RemoveContainer" containerID="6d591807b1f9040708210914516759b51d65b24f7c18bc992d5702437e7ea1b1" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.643379 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/2.log" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.643971 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/1.log" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.644008 4675 generic.go:334] "Generic (PLEG): container finished" podID="455c5b5a-917d-4361-bcc0-9283ffce0e86" containerID="26aedf96f496e3744765f40ce0f4bd2ed20778645ad27ae30a179c9de8454a5f" exitCode=2 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.644040 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerDied","Data":"26aedf96f496e3744765f40ce0f4bd2ed20778645ad27ae30a179c9de8454a5f"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.645157 4675 scope.go:117] "RemoveContainer" containerID="26aedf96f496e3744765f40ce0f4bd2ed20778645ad27ae30a179c9de8454a5f" Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.647324 4675 generic.go:334] "Generic (PLEG): container finished" podID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerID="9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd" exitCode=0 Nov 21 13:46:33 crc 
kubenswrapper[4675]: I1121 13:46:33.647396 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerDied","Data":"9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd"} Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.649447 4675 generic.go:334] "Generic (PLEG): container finished" podID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerID="8f8b4d31744a91964583d3e6fe960dc2b7c68a5696763e3105d56cb80b53f0fc" exitCode=0 Nov 21 13:46:33 crc kubenswrapper[4675]: I1121 13:46:33.649532 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerDied","Data":"8f8b4d31744a91964583d3e6fe960dc2b7c68a5696763e3105d56cb80b53f0fc"} Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.259132 4675 scope.go:117] "RemoveContainer" containerID="73609a4ba983edbb274ecf46c6f71f349a32f5aac48fc5b0c24a1acdd05e6674" Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.663577 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-acl-logging/0.log" Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.664350 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-controller/0.log" Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.664941 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3" exitCode=0 Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.664974 4675 generic.go:334] "Generic (PLEG): container finished" podID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerID="8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e" exitCode=0 Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.665084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3"} Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.665135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e"} Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.667623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hsw5h_455c5b5a-917d-4361-bcc0-9283ffce0e86/kube-multus/2.log" Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.667747 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hsw5h" event={"ID":"455c5b5a-917d-4361-bcc0-9283ffce0e86","Type":"ContainerStarted","Data":"b50dcb5164d737bb2513771cfd957c8df32ac516cfd8c422aa7777822538612a"} Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.671096 4675 generic.go:334] "Generic (PLEG): container finished" podID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerID="2a428780348f29797a3551814eca716921e45c8560b1ee7f24e1bb32dc82ce45" exitCode=0 Nov 21 13:46:34 crc kubenswrapper[4675]: I1121 13:46:34.671188 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerDied","Data":"2a428780348f29797a3551814eca716921e45c8560b1ee7f24e1bb32dc82ce45"} Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.054731 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-acl-logging/0.log" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.055302 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-controller/0.log" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.055763 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097181 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-config\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097422 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95hrv\" (UniqueName: \"kubernetes.io/projected/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-kube-api-access-95hrv\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097460 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-slash\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097484 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-env-overrides\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097524 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-netns\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097547 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-node-log\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097562 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-slash" (OuterVolumeSpecName: "host-slash") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097567 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-kubelet\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097624 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-script-lib\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097649 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-systemd-units\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097640 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-node-log" (OuterVolumeSpecName: "node-log") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097653 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097706 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097708 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-var-lib-openvswitch\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097670 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097720 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097771 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-systemd\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097796 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-netd\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097831 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-log-socket\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097869 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovn-node-metrics-cert\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097891 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-etc-openvswitch\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097911 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-ovn-kubernetes\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097907 4675 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097938 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097965 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-openvswitch\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.097996 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-ovn\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098011 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098039 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-bin\") pod \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\" (UID: \"5fd58cf4-de2e-4357-96eb-4fdb4694ea48\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098061 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098122 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098147 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098174 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-log-socket" (OuterVolumeSpecName: "log-socket") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098197 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098483 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098570 4675 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-log-socket\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098598 4675 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098610 4675 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098620 4675 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098630 4675 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098640 4675 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098651 4675 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098659 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098669 4675 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-slash\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098678 4675 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098687 4675 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-node-log\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098696 4675 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098705 4675 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-systemd-units\") on node \"crc\" 
DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098714 4675 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098725 4675 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.098700 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.105491 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-kube-api-access-95hrv" (OuterVolumeSpecName: "kube-api-access-95hrv") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "kube-api-access-95hrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.109432 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n7bcx"] Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110454 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110478 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110495 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kubecfg-setup" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110504 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kubecfg-setup" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110514 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110499 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110522 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110592 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110609 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110627 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-acl-logging" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110635 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-acl-logging" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110646 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110654 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110668 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="northd" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110679 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="northd" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110689 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="sbdb" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110697 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="sbdb" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110714 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="nbdb" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110724 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="nbdb" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110733 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-node" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110742 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-node" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.110756 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110764 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.110993 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111006 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111019 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="nbdb" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111029 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="sbdb" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111038 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111047 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovn-acl-logging" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111060 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111091 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111100 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="northd" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111111 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="kube-rbac-proxy-node" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111121 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111131 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.111305 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111318 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: E1121 13:46:35.111645 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.111655 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" containerName="ovnkube-controller" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.116562 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.117506 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5fd58cf4-de2e-4357-96eb-4fdb4694ea48" (UID: "5fd58cf4-de2e-4357-96eb-4fdb4694ea48"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.199488 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-systemd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.199972 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-log-socket\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-systemd-units\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200055 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-kubelet\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200092 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-ovnkube-script-lib\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200197 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-ovn\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200229 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-cni-netd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ad07785e-3755-48da-bedc-b41e5db78a13-ovn-node-metrics-cert\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200288 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200324 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmgsd\" (UniqueName: \"kubernetes.io/projected/ad07785e-3755-48da-bedc-b41e5db78a13-kube-api-access-fmgsd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200549 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-var-lib-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-slash\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-run-netns\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.200736 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-run-ovn-kubernetes\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201209 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-etc-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201288 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-ovnkube-config\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc 
kubenswrapper[4675]: I1121 13:46:35.201417 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-env-overrides\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201533 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-cni-bin\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201613 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-node-log\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201767 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201789 4675 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201804 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201825 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95hrv\" (UniqueName: \"kubernetes.io/projected/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-kube-api-access-95hrv\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.201837 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fd58cf4-de2e-4357-96eb-4fdb4694ea48-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303482 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-cni-bin\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303603 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-node-log\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303653 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-systemd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303680 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-log-socket\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303733 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-cni-bin\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303737 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-systemd-units\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303790 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-systemd-units\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303841 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-node-log\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-kubelet\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-ovnkube-script-lib\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303925 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-log-socket\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303951 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-ovn\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303963 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-systemd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-cni-netd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad07785e-3755-48da-bedc-b41e5db78a13-ovn-node-metrics-cert\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304120 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304154 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmgsd\" (UniqueName: \"kubernetes.io/projected/ad07785e-3755-48da-bedc-b41e5db78a13-kube-api-access-fmgsd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304194 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-var-lib-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304229 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-slash\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 
crc kubenswrapper[4675]: I1121 13:46:35.304257 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-run-netns\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304297 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-run-ovn-kubernetes\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304341 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-etc-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304376 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-ovnkube-config\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304412 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-env-overrides\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.304950 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-ovnkube-script-lib\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305005 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-kubelet\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305222 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-env-overrides\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305289 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-var-lib-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305334 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-slash\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.303891 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305375 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-run-netns\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305420 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-run-ovn-kubernetes\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305446 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-run-ovn\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-etc-openvswitch\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.305488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-cni-netd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.306218 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad07785e-3755-48da-bedc-b41e5db78a13-ovnkube-config\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.306289 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad07785e-3755-48da-bedc-b41e5db78a13-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.309639 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ad07785e-3755-48da-bedc-b41e5db78a13-ovn-node-metrics-cert\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.323573 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmgsd\" (UniqueName: \"kubernetes.io/projected/ad07785e-3755-48da-bedc-b41e5db78a13-kube-api-access-fmgsd\") pod \"ovnkube-node-n7bcx\" (UID: \"ad07785e-3755-48da-bedc-b41e5db78a13\") " pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.490801 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:46:35 crc kubenswrapper[4675]: W1121 13:46:35.510889 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad07785e_3755_48da_bedc_b41e5db78a13.slice/crio-a8927faf45e370cd0b40ff00b4205837a478b24c9d39810e6d6f18f51f324baa WatchSource:0}: Error finding container a8927faf45e370cd0b40ff00b4205837a478b24c9d39810e6d6f18f51f324baa: Status 404 returned error can't find the container with id a8927faf45e370cd0b40ff00b4205837a478b24c9d39810e6d6f18f51f324baa Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.680307 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerStarted","Data":"1621951802ab21851655e9bd3e26b78e026efcb06acbd54b34107709c7d01c30"} Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.683175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"a8927faf45e370cd0b40ff00b4205837a478b24c9d39810e6d6f18f51f324baa"} Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.688367 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-acl-logging/0.log" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.689051 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28jn_5fd58cf4-de2e-4357-96eb-4fdb4694ea48/ovn-controller/0.log" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.689517 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" event={"ID":"5fd58cf4-de2e-4357-96eb-4fdb4694ea48","Type":"ContainerDied","Data":"f45a993638e679390172a556f88aeea02ede4cd2c4c11895cf8d01986096798e"} Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.689553 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28jn" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.689557 4675 scope.go:117] "RemoveContainer" containerID="c5f661d8d67aa7543bd09fe3d0b66402ebe6bac1a49d1de091718eed7ff1ace7" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.710564 4675 scope.go:117] "RemoveContainer" containerID="5c734ac4bc54ade466827b7be466a08784aa726f0ccce62f28c90ceda9f13edc" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.728638 4675 scope.go:117] "RemoveContainer" containerID="73a609eac13500c9d5d4fee3d1f1e76e2bab05651a9b9a64cd68567fba8c09d4" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.733001 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t978g" podStartSLOduration=3.207320884 podStartE2EDuration="11.732981236s" podCreationTimestamp="2025-11-21 13:46:24 +0000 UTC" firstStartedPulling="2025-11-21 13:46:26.552618524 +0000 UTC m=+863.279033251" lastFinishedPulling="2025-11-21 13:46:35.078278865 +0000 UTC m=+871.804693603" observedRunningTime="2025-11-21 13:46:35.700543364 +0000 UTC m=+872.426958101" watchObservedRunningTime="2025-11-21 13:46:35.732981236 +0000 UTC m=+872.459395963" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.734167 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28jn"] Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.737767 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28jn"] Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.748308 4675 scope.go:117] "RemoveContainer" containerID="220907c9c143d8d377e80bf095d5f94cab611fa61cadc684ba6868893d70bdb7" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.758674 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.766768 4675 scope.go:117] "RemoveContainer" containerID="f656749385e2adadf0430e507e2a8e1399e01262ef613a0922498ce9cb29c1b3" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.787196 4675 scope.go:117] "RemoveContainer" containerID="8a304587d0a44ff6a72067f574bac387eacb67a4916ec56a950df14af7758a8e" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.804302 4675 scope.go:117] "RemoveContainer" containerID="4f60ba880ea8553e29a94fe12f7a3f9d7bfbb322980302a067f1d5d217cfd6db" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.809699 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-util\") pod \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.809768 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-bundle\") pod \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.809879 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpzmz\" (UniqueName: \"kubernetes.io/projected/4268fd55-2e1a-4ce3-b168-61ae292f22b9-kube-api-access-xpzmz\") pod \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\" (UID: \"4268fd55-2e1a-4ce3-b168-61ae292f22b9\") " Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.812217 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-bundle" (OuterVolumeSpecName: "bundle") pod "4268fd55-2e1a-4ce3-b168-61ae292f22b9" (UID: "4268fd55-2e1a-4ce3-b168-61ae292f22b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.815279 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4268fd55-2e1a-4ce3-b168-61ae292f22b9-kube-api-access-xpzmz" (OuterVolumeSpecName: "kube-api-access-xpzmz") pod "4268fd55-2e1a-4ce3-b168-61ae292f22b9" (UID: "4268fd55-2e1a-4ce3-b168-61ae292f22b9"). InnerVolumeSpecName "kube-api-access-xpzmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.820468 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-util" (OuterVolumeSpecName: "util") pod "4268fd55-2e1a-4ce3-b168-61ae292f22b9" (UID: "4268fd55-2e1a-4ce3-b168-61ae292f22b9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.835879 4675 scope.go:117] "RemoveContainer" containerID="cae7c4294113b42e1922afcbf84cbe03bef8ad8963cac73ee09561f307d09e87" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.854156 4675 scope.go:117] "RemoveContainer" containerID="6b9e8be832420ec286008a2c702323f70792c241541da1acb4fce557d4f2c86c" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.911931 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.911973 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4268fd55-2e1a-4ce3-b168-61ae292f22b9-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:35 crc kubenswrapper[4675]: I1121 13:46:35.911982 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpzmz\" (UniqueName: \"kubernetes.io/projected/4268fd55-2e1a-4ce3-b168-61ae292f22b9-kube-api-access-xpzmz\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:36 crc kubenswrapper[4675]: I1121 13:46:36.697733 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" event={"ID":"4268fd55-2e1a-4ce3-b168-61ae292f22b9","Type":"ContainerDied","Data":"09d6e94027b425aec55dc974ba1d6166c31c1bd24414df91552924148ad04117"} Nov 21 13:46:36 crc kubenswrapper[4675]: I1121 13:46:36.697784 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn" Nov 21 13:46:36 crc kubenswrapper[4675]: I1121 13:46:36.697776 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d6e94027b425aec55dc974ba1d6166c31c1bd24414df91552924148ad04117" Nov 21 13:46:36 crc kubenswrapper[4675]: I1121 13:46:36.699316 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad07785e-3755-48da-bedc-b41e5db78a13" containerID="e2ca8da60188379e4b64962651c7b33b2fb48d5332f230888353216a28a4e31c" exitCode=0 Nov 21 13:46:36 crc kubenswrapper[4675]: I1121 13:46:36.699383 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerDied","Data":"e2ca8da60188379e4b64962651c7b33b2fb48d5332f230888353216a28a4e31c"} Nov 21 13:46:36 crc kubenswrapper[4675]: I1121 13:46:36.858226 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd58cf4-de2e-4357-96eb-4fdb4694ea48" path="/var/lib/kubelet/pods/5fd58cf4-de2e-4357-96eb-4fdb4694ea48/volumes" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.816592 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mlwqg"] Nov 21 13:46:39 crc kubenswrapper[4675]: E1121 13:46:39.817362 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerName="util" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.817378 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerName="util" Nov 21 13:46:39 crc kubenswrapper[4675]: E1121 13:46:39.817395 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" 
containerName="extract" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.817404 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerName="extract" Nov 21 13:46:39 crc kubenswrapper[4675]: E1121 13:46:39.817427 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerName="pull" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.817437 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerName="pull" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.817586 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4268fd55-2e1a-4ce3-b168-61ae292f22b9" containerName="extract" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.818650 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.872772 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-catalog-content\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.872837 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tt5\" (UniqueName: \"kubernetes.io/projected/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-kube-api-access-t7tt5\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.872975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-utilities\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.974383 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-utilities\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.974475 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-catalog-content\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.974513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tt5\" (UniqueName: \"kubernetes.io/projected/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-kube-api-access-t7tt5\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.974983 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-catalog-content\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.975263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-utilities\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:39 crc kubenswrapper[4675]: I1121 13:46:39.992760 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tt5\" (UniqueName: \"kubernetes.io/projected/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-kube-api-access-t7tt5\") pod \"community-operators-mlwqg\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:40 crc kubenswrapper[4675]: I1121 13:46:40.137475 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:40 crc kubenswrapper[4675]: E1121 13:46:40.727599 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(1a3f17c3839f74093b584e8303c35961de91f033373770c5ec69b33ba20afa6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:46:40 crc kubenswrapper[4675]: E1121 13:46:40.727667 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(1a3f17c3839f74093b584e8303c35961de91f033373770c5ec69b33ba20afa6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:40 crc kubenswrapper[4675]: E1121 13:46:40.727691 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(1a3f17c3839f74093b584e8303c35961de91f033373770c5ec69b33ba20afa6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:40 crc kubenswrapper[4675]: E1121 13:46:40.727744 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-mlwqg_openshift-marketplace(db6aaca7-9dda-4eb0-a877-f31bc34bfd8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-mlwqg_openshift-marketplace(db6aaca7-9dda-4eb0-a877-f31bc34bfd8b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(1a3f17c3839f74093b584e8303c35961de91f033373770c5ec69b33ba20afa6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/community-operators-mlwqg" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:44.757930 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"4a0c5d79227ca38df9c8cedac16819751317020af075694abe577e3fd8d314eb"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:45.003249 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:45.003296 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:45.086531 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:45.763627 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerStarted","Data":"ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:45.832692 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:46.826460 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rwc65" podStartSLOduration=8.667932858 podStartE2EDuration="22.82644083s" podCreationTimestamp="2025-11-21 13:46:24 +0000 UTC" firstStartedPulling="2025-11-21 13:46:26.544698291 +0000 UTC m=+863.271113018" lastFinishedPulling="2025-11-21 13:46:40.703206263 +0000 UTC m=+877.429620990" observedRunningTime="2025-11-21 13:46:46.825787734 +0000 UTC m=+883.552202461" watchObservedRunningTime="2025-11-21 13:46:46.82644083 +0000 UTC m=+883.552855557" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:46.993953 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t978g"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:47.773733 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t978g" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="registry-server" containerID="cri-o://1621951802ab21851655e9bd3e26b78e026efcb06acbd54b34107709c7d01c30" gracePeriod=2 Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.783885 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"d7534e21e1106a567dc189729b6f9daed7a7ee8681e154953fbf5f6bab3459a6"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.874786 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.875706 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.882338 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-nl759" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.882633 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.885084 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.996523 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:48.997299 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.000198 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z94f5" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.000202 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.001763 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtw27\" (UniqueName: \"kubernetes.io/projected/ce64e510-eca3-48f5-858d-165c3d3cfba7-kube-api-access-xtw27\") pod \"obo-prometheus-operator-668cf9dfbb-mx844\" (UID: \"ce64e510-eca3-48f5-858d-165c3d3cfba7\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.003669 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.004380 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.099002 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-cb77k"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.099712 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.102130 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zhrkv" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.102386 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.102712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a50974d-f334-4845-b892-5e4b97fc3d79-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22\" (UID: \"1a50974d-f334-4845-b892-5e4b97fc3d79\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.102855 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/320effcd-ec3e-4741-b3d9-e0ec17502e50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf\" (UID: \"320effcd-ec3e-4741-b3d9-e0ec17502e50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.102999 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a50974d-f334-4845-b892-5e4b97fc3d79-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22\" (UID: \"1a50974d-f334-4845-b892-5e4b97fc3d79\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.103043 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtw27\" (UniqueName: \"kubernetes.io/projected/ce64e510-eca3-48f5-858d-165c3d3cfba7-kube-api-access-xtw27\") pod \"obo-prometheus-operator-668cf9dfbb-mx844\" (UID: \"ce64e510-eca3-48f5-858d-165c3d3cfba7\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.103191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/320effcd-ec3e-4741-b3d9-e0ec17502e50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf\" (UID: \"320effcd-ec3e-4741-b3d9-e0ec17502e50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.151168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtw27\" (UniqueName: \"kubernetes.io/projected/ce64e510-eca3-48f5-858d-165c3d3cfba7-kube-api-access-xtw27\") pod \"obo-prometheus-operator-668cf9dfbb-mx844\" (UID: \"ce64e510-eca3-48f5-858d-165c3d3cfba7\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.195093 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.204765 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/320effcd-ec3e-4741-b3d9-e0ec17502e50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf\" (UID: \"320effcd-ec3e-4741-b3d9-e0ec17502e50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.204819 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a50974d-f334-4845-b892-5e4b97fc3d79-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22\" (UID: \"1a50974d-f334-4845-b892-5e4b97fc3d79\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.204874 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ps9v\" (UniqueName: \"kubernetes.io/projected/57e668d4-e3df-4b36-ad58-51e5b7f2d16e-kube-api-access-2ps9v\") pod \"observability-operator-d8bb48f5d-cb77k\" (UID: \"57e668d4-e3df-4b36-ad58-51e5b7f2d16e\") " pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.204910 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/320effcd-ec3e-4741-b3d9-e0ec17502e50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf\" (UID: \"320effcd-ec3e-4741-b3d9-e0ec17502e50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.204948 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/57e668d4-e3df-4b36-ad58-51e5b7f2d16e-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-cb77k\" (UID: \"57e668d4-e3df-4b36-ad58-51e5b7f2d16e\") " pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.204996 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a50974d-f334-4845-b892-5e4b97fc3d79-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22\" (UID: \"1a50974d-f334-4845-b892-5e4b97fc3d79\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.210717 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a50974d-f334-4845-b892-5e4b97fc3d79-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22\" (UID: \"1a50974d-f334-4845-b892-5e4b97fc3d79\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.218589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a50974d-f334-4845-b892-5e4b97fc3d79-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22\" (UID: \"1a50974d-f334-4845-b892-5e4b97fc3d79\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.219597 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/320effcd-ec3e-4741-b3d9-e0ec17502e50-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf\" (UID: \"320effcd-ec3e-4741-b3d9-e0ec17502e50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.229514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/320effcd-ec3e-4741-b3d9-e0ec17502e50-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf\" (UID: \"320effcd-ec3e-4741-b3d9-e0ec17502e50\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.234187 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(d5b7f4f03a3284fb7aec5bbbbb47b5022a38874377a82ca9c0423c56918d459a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.234231 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(d5b7f4f03a3284fb7aec5bbbbb47b5022a38874377a82ca9c0423c56918d459a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.234249 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(d5b7f4f03a3284fb7aec5bbbbb47b5022a38874377a82ca9c0423c56918d459a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.234287 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators(ce64e510-eca3-48f5-858d-165c3d3cfba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators(ce64e510-eca3-48f5-858d-165c3d3cfba7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(d5b7f4f03a3284fb7aec5bbbbb47b5022a38874377a82ca9c0423c56918d459a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" podUID="ce64e510-eca3-48f5-858d-165c3d3cfba7" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.305964 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ps9v\" (UniqueName: \"kubernetes.io/projected/57e668d4-e3df-4b36-ad58-51e5b7f2d16e-kube-api-access-2ps9v\") pod \"observability-operator-d8bb48f5d-cb77k\" (UID: \"57e668d4-e3df-4b36-ad58-51e5b7f2d16e\") " pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.306121 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/57e668d4-e3df-4b36-ad58-51e5b7f2d16e-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-cb77k\" (UID: \"57e668d4-e3df-4b36-ad58-51e5b7f2d16e\") " pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.312549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/57e668d4-e3df-4b36-ad58-51e5b7f2d16e-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-cb77k\" (UID: \"57e668d4-e3df-4b36-ad58-51e5b7f2d16e\") " pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.313977 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.324920 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-67wfr"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.325700 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.329348 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qtt72" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.331212 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.347724 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ps9v\" (UniqueName: \"kubernetes.io/projected/57e668d4-e3df-4b36-ad58-51e5b7f2d16e-kube-api-access-2ps9v\") pod \"observability-operator-d8bb48f5d-cb77k\" (UID: \"57e668d4-e3df-4b36-ad58-51e5b7f2d16e\") " pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.358250 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(3dff51a4663f5c485648bf5c1d508df8b898c2ca453c4e08e628b0fbd5cb4ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.358316 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(3dff51a4663f5c485648bf5c1d508df8b898c2ca453c4e08e628b0fbd5cb4ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.358337 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(3dff51a4663f5c485648bf5c1d508df8b898c2ca453c4e08e628b0fbd5cb4ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.358389 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators(1a50974d-f334-4845-b892-5e4b97fc3d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators(1a50974d-f334-4845-b892-5e4b97fc3d79)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(3dff51a4663f5c485648bf5c1d508df8b898c2ca453c4e08e628b0fbd5cb4ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" podUID="1a50974d-f334-4845-b892-5e4b97fc3d79" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.366490 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(6b7475969801c93765e20475c899b843473614a08f8a36d616f1edaf05a17040): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.366548 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(6b7475969801c93765e20475c899b843473614a08f8a36d616f1edaf05a17040): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.366570 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(6b7475969801c93765e20475c899b843473614a08f8a36d616f1edaf05a17040): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.366632 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators(320effcd-ec3e-4741-b3d9-e0ec17502e50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators(320effcd-ec3e-4741-b3d9-e0ec17502e50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(6b7475969801c93765e20475c899b843473614a08f8a36d616f1edaf05a17040): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" podUID="320effcd-ec3e-4741-b3d9-e0ec17502e50" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.407180 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhlwn\" (UniqueName: \"kubernetes.io/projected/01c95951-b168-42ed-aab7-9ffe813b6d55-kube-api-access-lhlwn\") pod \"perses-operator-5446b9c989-67wfr\" (UID: \"01c95951-b168-42ed-aab7-9ffe813b6d55\") " pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.407403 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/01c95951-b168-42ed-aab7-9ffe813b6d55-openshift-service-ca\") pod \"perses-operator-5446b9c989-67wfr\" (UID: \"01c95951-b168-42ed-aab7-9ffe813b6d55\") " pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.418234 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.446778 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(d0e44aa8da2ebfb22c22015a0da13848a9507c53f928dfed895cf5c419e1b530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.446847 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(d0e44aa8da2ebfb22c22015a0da13848a9507c53f928dfed895cf5c419e1b530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.446878 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(d0e44aa8da2ebfb22c22015a0da13848a9507c53f928dfed895cf5c419e1b530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.446933 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-cb77k_openshift-operators(57e668d4-e3df-4b36-ad58-51e5b7f2d16e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-cb77k_openshift-operators(57e668d4-e3df-4b36-ad58-51e5b7f2d16e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(d0e44aa8da2ebfb22c22015a0da13848a9507c53f928dfed895cf5c419e1b530): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" podUID="57e668d4-e3df-4b36-ad58-51e5b7f2d16e" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.508625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/01c95951-b168-42ed-aab7-9ffe813b6d55-openshift-service-ca\") pod \"perses-operator-5446b9c989-67wfr\" (UID: \"01c95951-b168-42ed-aab7-9ffe813b6d55\") " pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.509035 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhlwn\" (UniqueName: \"kubernetes.io/projected/01c95951-b168-42ed-aab7-9ffe813b6d55-kube-api-access-lhlwn\") pod \"perses-operator-5446b9c989-67wfr\" (UID: \"01c95951-b168-42ed-aab7-9ffe813b6d55\") " pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.510416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/01c95951-b168-42ed-aab7-9ffe813b6d55-openshift-service-ca\") pod \"perses-operator-5446b9c989-67wfr\" (UID: \"01c95951-b168-42ed-aab7-9ffe813b6d55\") " pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.553012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhlwn\" (UniqueName: \"kubernetes.io/projected/01c95951-b168-42ed-aab7-9ffe813b6d55-kube-api-access-lhlwn\") pod \"perses-operator-5446b9c989-67wfr\" (UID: \"01c95951-b168-42ed-aab7-9ffe813b6d55\") " pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.728661 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.758609 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(57762e5a5794fbb94e12f52daf2936dac98ef87e76ef7e53f16e4829f58878b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.758697 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(57762e5a5794fbb94e12f52daf2936dac98ef87e76ef7e53f16e4829f58878b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.758739 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(57762e5a5794fbb94e12f52daf2936dac98ef87e76ef7e53f16e4829f58878b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:49.758802 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-67wfr_openshift-operators(01c95951-b168-42ed-aab7-9ffe813b6d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-67wfr_openshift-operators(01c95951-b168-42ed-aab7-9ffe813b6d55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(57762e5a5794fbb94e12f52daf2936dac98ef87e76ef7e53f16e4829f58878b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-67wfr" podUID="01c95951-b168-42ed-aab7-9ffe813b6d55" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.791514 4675 generic.go:334] "Generic (PLEG): container finished" podID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerID="1621951802ab21851655e9bd3e26b78e026efcb06acbd54b34107709c7d01c30" exitCode=0 Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:49.791548 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerDied","Data":"1621951802ab21851655e9bd3e26b78e026efcb06acbd54b34107709c7d01c30"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:52.807728 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"af4e53d60ec6cfe46f8d86f93a3716cba3717117ddacb259b2b4e57c276ba043"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:52.848120 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:52.848695 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:52.881361 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(251d31795c4bdb08329a4b3f64a5785b8610f4d8b36b5994f3afc7c761e7b43d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:52.881435 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(251d31795c4bdb08329a4b3f64a5785b8610f4d8b36b5994f3afc7c761e7b43d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:52.881460 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(251d31795c4bdb08329a4b3f64a5785b8610f4d8b36b5994f3afc7c761e7b43d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:52.881508 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-mlwqg_openshift-marketplace(db6aaca7-9dda-4eb0-a877-f31bc34bfd8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-mlwqg_openshift-marketplace(db6aaca7-9dda-4eb0-a877-f31bc34bfd8b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(251d31795c4bdb08329a4b3f64a5785b8610f4d8b36b5994f3afc7c761e7b43d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/community-operators-mlwqg" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.795858 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.796115 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.857434 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.893917 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.918589 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.987673 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-utilities\") pod \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.987747 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-catalog-content\") pod \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.987789 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7rhl\" (UniqueName: \"kubernetes.io/projected/406d0828-fd52-4538-aa6e-f60e4d58f0c0-kube-api-access-h7rhl\") pod \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\" (UID: \"406d0828-fd52-4538-aa6e-f60e4d58f0c0\") " Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:54.988554 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-utilities" (OuterVolumeSpecName: "utilities") pod "406d0828-fd52-4538-aa6e-f60e4d58f0c0" (UID: "406d0828-fd52-4538-aa6e-f60e4d58f0c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.001206 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406d0828-fd52-4538-aa6e-f60e4d58f0c0-kube-api-access-h7rhl" (OuterVolumeSpecName: "kube-api-access-h7rhl") pod "406d0828-fd52-4538-aa6e-f60e4d58f0c0" (UID: "406d0828-fd52-4538-aa6e-f60e4d58f0c0"). InnerVolumeSpecName "kube-api-access-h7rhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.089321 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.089346 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7rhl\" (UniqueName: \"kubernetes.io/projected/406d0828-fd52-4538-aa6e-f60e4d58f0c0-kube-api-access-h7rhl\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.133739 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406d0828-fd52-4538-aa6e-f60e4d58f0c0" (UID: "406d0828-fd52-4538-aa6e-f60e4d58f0c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.190769 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d0828-fd52-4538-aa6e-f60e4d58f0c0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.825046 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t978g" event={"ID":"406d0828-fd52-4538-aa6e-f60e4d58f0c0","Type":"ContainerDied","Data":"297d068bee7cf5fb8f6c03596fd9f0f32e9d68669d8712827fd78e47a9116781"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.825103 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t978g" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.825114 4675 scope.go:117] "RemoveContainer" containerID="1621951802ab21851655e9bd3e26b78e026efcb06acbd54b34107709c7d01c30" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.838812 4675 scope.go:117] "RemoveContainer" containerID="8f8b4d31744a91964583d3e6fe960dc2b7c68a5696763e3105d56cb80b53f0fc" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.851477 4675 scope.go:117] "RemoveContainer" containerID="bb819a355f379d770f1af38a4b89444f86e0cdbb7b4bae0f4e7ba84f934bf7f3" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.871009 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t978g"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:55.892435 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t978g"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:56.671931 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwc65"] Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:56.830649 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rwc65" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="registry-server" containerID="cri-o://ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1" gracePeriod=2 Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:56.855117 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" path="/var/lib/kubelet/pods/406d0828-fd52-4538-aa6e-f60e4d58f0c0/volumes" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.828929 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.855146 4675 generic.go:334] "Generic (PLEG): container finished" podID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerID="ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1" exitCode=0 Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.855195 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwc65" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.855214 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerDied","Data":"ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.855242 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwc65" event={"ID":"d6ebe08f-b2ee-4612-bc20-39e31a66631e","Type":"ContainerDied","Data":"0c9185da6e67f7244ca5d942d919dbbffbf787cd8cf0f3c48ccf7542f3ab996f"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.855265 4675 scope.go:117] "RemoveContainer" containerID="ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.862404 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"7d6a27eb2765c9e9b464e6769d5ce63de2674116bcf4ee2205361210740df837"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.862442 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"6480807b877bfdb16f3bbdee35ba52e1c3f4eb281a0e8e6ede0a8876bd5520cc"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.862455 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"a63279073617492974b0171c35e2e16ee37b836e3a4aeb98d674e7473bc8cf41"} Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.868871 4675 scope.go:117] "RemoveContainer" containerID="9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.874540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shzxt\" (UniqueName: \"kubernetes.io/projected/d6ebe08f-b2ee-4612-bc20-39e31a66631e-kube-api-access-shzxt\") pod \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.874628 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-catalog-content\") pod \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.874676 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-utilities\") pod \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\" (UID: \"d6ebe08f-b2ee-4612-bc20-39e31a66631e\") " Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.876893 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-utilities" (OuterVolumeSpecName: "utilities") pod "d6ebe08f-b2ee-4612-bc20-39e31a66631e" (UID: "d6ebe08f-b2ee-4612-bc20-39e31a66631e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.885286 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ebe08f-b2ee-4612-bc20-39e31a66631e-kube-api-access-shzxt" (OuterVolumeSpecName: "kube-api-access-shzxt") pod "d6ebe08f-b2ee-4612-bc20-39e31a66631e" (UID: "d6ebe08f-b2ee-4612-bc20-39e31a66631e"). InnerVolumeSpecName "kube-api-access-shzxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.895369 4675 scope.go:117] "RemoveContainer" containerID="dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.899697 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6ebe08f-b2ee-4612-bc20-39e31a66631e" (UID: "d6ebe08f-b2ee-4612-bc20-39e31a66631e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.915310 4675 scope.go:117] "RemoveContainer" containerID="ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:59.915860 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1\": container with ID starting with ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1 not found: ID does not exist" containerID="ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.915918 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1"} err="failed to get container status \"ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1\": rpc error: code = NotFound desc = could not find container \"ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1\": container with ID starting with ee9d89a6d4cefc9c90ee48cb0f06c1af1c875b9eaba76e4a406abfa09220daa1 not found: ID does not exist" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.916253 4675 scope.go:117] "RemoveContainer" containerID="9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:59.916567 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd\": container with ID starting with 9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd not found: ID does not exist" containerID="9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.916593 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd"} err="failed to get container status \"9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd\": rpc error: code = NotFound desc = could not find container \"9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd\": container with ID starting with 
9633217f882ce98208c6f9c51c7292a1f797dc663f626a949f3c3f73a59fd5fd not found: ID does not exist" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.916610 4675 scope.go:117] "RemoveContainer" containerID="dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8" Nov 21 13:46:59 crc kubenswrapper[4675]: E1121 13:46:59.916840 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8\": container with ID starting with dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8 not found: ID does not exist" containerID="dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.916870 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8"} err="failed to get container status \"dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8\": rpc error: code = NotFound desc = could not find container \"dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8\": container with ID starting with dc007c383010625dbbdb29898ba0038a30bd9d273d106e904cb31e9f51dc8cc8 not found: ID does not exist" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.976712 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shzxt\" (UniqueName: \"kubernetes.io/projected/d6ebe08f-b2ee-4612-bc20-39e31a66631e-kube-api-access-shzxt\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.976765 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:46:59 crc kubenswrapper[4675]: I1121 13:46:59.976779 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ebe08f-b2ee-4612-bc20-39e31a66631e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:00 crc kubenswrapper[4675]: I1121 13:47:00.179299 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwc65"] Nov 21 13:47:00 crc kubenswrapper[4675]: I1121 13:47:00.184869 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwc65"] Nov 21 13:47:00 crc kubenswrapper[4675]: I1121 13:47:00.848532 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:00 crc kubenswrapper[4675]: I1121 13:47:00.849455 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:00 crc kubenswrapper[4675]: I1121 13:47:00.860745 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" path="/var/lib/kubelet/pods/d6ebe08f-b2ee-4612-bc20-39e31a66631e/volumes" Nov 21 13:47:00 crc kubenswrapper[4675]: E1121 13:47:00.869955 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(b5f7e06c8f21319009b65831d60597f45670f9c7b96623e5fecaa7b0d521d75e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 21 13:47:00 crc kubenswrapper[4675]: E1121 13:47:00.869998 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(b5f7e06c8f21319009b65831d60597f45670f9c7b96623e5fecaa7b0d521d75e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:00 crc kubenswrapper[4675]: E1121 13:47:00.870019 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(b5f7e06c8f21319009b65831d60597f45670f9c7b96623e5fecaa7b0d521d75e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:00 crc kubenswrapper[4675]: E1121 13:47:00.870086 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-67wfr_openshift-operators(01c95951-b168-42ed-aab7-9ffe813b6d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-67wfr_openshift-operators(01c95951-b168-42ed-aab7-9ffe813b6d55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(b5f7e06c8f21319009b65831d60597f45670f9c7b96623e5fecaa7b0d521d75e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-67wfr" podUID="01c95951-b168-42ed-aab7-9ffe813b6d55" Nov 21 13:47:01 crc kubenswrapper[4675]: I1121 13:47:01.850367 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:01 crc kubenswrapper[4675]: I1121 13:47:01.851680 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:01 crc kubenswrapper[4675]: E1121 13:47:01.894932 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(83d5efe3bf0f6bf5df48d799dad74935604421473b194a081ee37365292112f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:01 crc kubenswrapper[4675]: E1121 13:47:01.895153 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(83d5efe3bf0f6bf5df48d799dad74935604421473b194a081ee37365292112f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:01 crc kubenswrapper[4675]: E1121 13:47:01.895266 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(83d5efe3bf0f6bf5df48d799dad74935604421473b194a081ee37365292112f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:01 crc kubenswrapper[4675]: E1121 13:47:01.895385 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators(ce64e510-eca3-48f5-858d-165c3d3cfba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators(ce64e510-eca3-48f5-858d-165c3d3cfba7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(83d5efe3bf0f6bf5df48d799dad74935604421473b194a081ee37365292112f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" podUID="ce64e510-eca3-48f5-858d-165c3d3cfba7" Nov 21 13:47:02 crc kubenswrapper[4675]: I1121 13:47:02.891497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"6bc65c0a0b0d38edd16890471fdd57c065bb6944a2ca56277586044592f38943"} Nov 21 13:47:03 crc kubenswrapper[4675]: I1121 13:47:03.848520 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:03 crc kubenswrapper[4675]: I1121 13:47:03.848822 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:03 crc kubenswrapper[4675]: I1121 13:47:03.848878 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:03 crc kubenswrapper[4675]: I1121 13:47:03.849338 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:03 crc kubenswrapper[4675]: I1121 13:47:03.849455 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:03 crc kubenswrapper[4675]: I1121 13:47:03.849340 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.897432 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(8d2ca2bd4d563ccaeafcd1ec1d4503cc178da82c206aaf130c1f87c368e525a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.897802 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(8d2ca2bd4d563ccaeafcd1ec1d4503cc178da82c206aaf130c1f87c368e525a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.897832 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(8d2ca2bd4d563ccaeafcd1ec1d4503cc178da82c206aaf130c1f87c368e525a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.897890 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-cb77k_openshift-operators(57e668d4-e3df-4b36-ad58-51e5b7f2d16e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-cb77k_openshift-operators(57e668d4-e3df-4b36-ad58-51e5b7f2d16e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(8d2ca2bd4d563ccaeafcd1ec1d4503cc178da82c206aaf130c1f87c368e525a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" podUID="57e668d4-e3df-4b36-ad58-51e5b7f2d16e" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.914229 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(bb62483134a9d587e2b4021c0c523659f43bb9786023c1011d75714af3f8c159): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.914297 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(bb62483134a9d587e2b4021c0c523659f43bb9786023c1011d75714af3f8c159): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.914322 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(bb62483134a9d587e2b4021c0c523659f43bb9786023c1011d75714af3f8c159): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.914376 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators(1a50974d-f334-4845-b892-5e4b97fc3d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators(1a50974d-f334-4845-b892-5e4b97fc3d79)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(bb62483134a9d587e2b4021c0c523659f43bb9786023c1011d75714af3f8c159): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" podUID="1a50974d-f334-4845-b892-5e4b97fc3d79" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.938635 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(3350e4b20f0a44c2f50731d6a583123341015370e775e5d23715e62373826d5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.938708 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(3350e4b20f0a44c2f50731d6a583123341015370e775e5d23715e62373826d5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.938733 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(3350e4b20f0a44c2f50731d6a583123341015370e775e5d23715e62373826d5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:03 crc kubenswrapper[4675]: E1121 13:47:03.938786 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators(320effcd-ec3e-4741-b3d9-e0ec17502e50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators(320effcd-ec3e-4741-b3d9-e0ec17502e50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(3350e4b20f0a44c2f50731d6a583123341015370e775e5d23715e62373826d5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" podUID="320effcd-ec3e-4741-b3d9-e0ec17502e50" Nov 21 13:47:04 crc kubenswrapper[4675]: I1121 13:47:04.909125 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" event={"ID":"ad07785e-3755-48da-bedc-b41e5db78a13","Type":"ContainerStarted","Data":"1c4ec01efe04708f8f59b3715faa2e962b2a0f1747f196514b9ba8c4c73d311c"} Nov 21 13:47:04 crc kubenswrapper[4675]: I1121 13:47:04.909689 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:47:04 crc kubenswrapper[4675]: I1121 13:47:04.909706 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:47:04 crc kubenswrapper[4675]: I1121 13:47:04.957802 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" podStartSLOduration=29.957785435 podStartE2EDuration="29.957785435s" podCreationTimestamp="2025-11-21 13:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:47:04.952720966 +0000 UTC m=+901.679135693" watchObservedRunningTime="2025-11-21 13:47:04.957785435 +0000 UTC m=+901.684200162" Nov 21 13:47:04 crc kubenswrapper[4675]: I1121 13:47:04.983301 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.491652 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.547929 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.761136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlwqg"] Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.761499 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.762013 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.807542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844"] Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.807921 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.808337 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.814301 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-67wfr"] Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.814395 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.814769 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:05 crc kubenswrapper[4675]: E1121 13:47:05.820343 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(29dfe8540818a6f05bb655c40881fa3945175ba7994280b66f7e8f8b40019d0a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:05 crc kubenswrapper[4675]: E1121 13:47:05.820413 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(29dfe8540818a6f05bb655c40881fa3945175ba7994280b66f7e8f8b40019d0a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:05 crc kubenswrapper[4675]: E1121 13:47:05.820441 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(29dfe8540818a6f05bb655c40881fa3945175ba7994280b66f7e8f8b40019d0a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:05 crc kubenswrapper[4675]: E1121 13:47:05.820491 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-mlwqg_openshift-marketplace(db6aaca7-9dda-4eb0-a877-f31bc34bfd8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-mlwqg_openshift-marketplace(db6aaca7-9dda-4eb0-a877-f31bc34bfd8b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-mlwqg_openshift-marketplace_db6aaca7-9dda-4eb0-a877-f31bc34bfd8b_0(29dfe8540818a6f05bb655c40881fa3945175ba7994280b66f7e8f8b40019d0a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/community-operators-mlwqg" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.843433 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf"] Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.843566 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.867876 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.899134 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-cb77k"] Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.899247 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.899780 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.908182 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22"] Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.908318 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:05 crc kubenswrapper[4675]: I1121 13:47:05.908861 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.005286 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(e6553f9c630b5c29d1f00ea52dede9c980d8ccb83c09a15bfdcbec9a84dbc551): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.005344 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(e6553f9c630b5c29d1f00ea52dede9c980d8ccb83c09a15bfdcbec9a84dbc551): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.005367 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(e6553f9c630b5c29d1f00ea52dede9c980d8ccb83c09a15bfdcbec9a84dbc551): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.005405 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-67wfr_openshift-operators(01c95951-b168-42ed-aab7-9ffe813b6d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-67wfr_openshift-operators(01c95951-b168-42ed-aab7-9ffe813b6d55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-67wfr_openshift-operators_01c95951-b168-42ed-aab7-9ffe813b6d55_0(e6553f9c630b5c29d1f00ea52dede9c980d8ccb83c09a15bfdcbec9a84dbc551): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-67wfr" podUID="01c95951-b168-42ed-aab7-9ffe813b6d55" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.012213 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(507042d472d21ef874616e76963c51fe59c53ab65753a672643477d638fc8031): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.012280 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(507042d472d21ef874616e76963c51fe59c53ab65753a672643477d638fc8031): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.012309 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(507042d472d21ef874616e76963c51fe59c53ab65753a672643477d638fc8031): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.012358 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators(ce64e510-eca3-48f5-858d-165c3d3cfba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators(ce64e510-eca3-48f5-858d-165c3d3cfba7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-mx844_openshift-operators_ce64e510-eca3-48f5-858d-165c3d3cfba7_0(507042d472d21ef874616e76963c51fe59c53ab65753a672643477d638fc8031): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" podUID="ce64e510-eca3-48f5-858d-165c3d3cfba7" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.045997 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(d98a5e9933526fae446f3547d2340180cf957c4a1129817bb36b386a62c5680f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.046111 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(d98a5e9933526fae446f3547d2340180cf957c4a1129817bb36b386a62c5680f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.046141 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(d98a5e9933526fae446f3547d2340180cf957c4a1129817bb36b386a62c5680f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.046191 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators(1a50974d-f334-4845-b892-5e4b97fc3d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators(1a50974d-f334-4845-b892-5e4b97fc3d79)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_openshift-operators_1a50974d-f334-4845-b892-5e4b97fc3d79_0(d98a5e9933526fae446f3547d2340180cf957c4a1129817bb36b386a62c5680f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" podUID="1a50974d-f334-4845-b892-5e4b97fc3d79" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.053809 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(45b2bdf3d74bc5da2ce42a42a01d1d8a310ae730e387fd8c7de60dbc69763d99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.053872 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(45b2bdf3d74bc5da2ce42a42a01d1d8a310ae730e387fd8c7de60dbc69763d99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.053895 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(45b2bdf3d74bc5da2ce42a42a01d1d8a310ae730e387fd8c7de60dbc69763d99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.053935 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators(320effcd-ec3e-4741-b3d9-e0ec17502e50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators(320effcd-ec3e-4741-b3d9-e0ec17502e50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_openshift-operators_320effcd-ec3e-4741-b3d9-e0ec17502e50_0(45b2bdf3d74bc5da2ce42a42a01d1d8a310ae730e387fd8c7de60dbc69763d99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" podUID="320effcd-ec3e-4741-b3d9-e0ec17502e50" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.059988 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(ce4d23b16ba8c80495d6ebcaba1f17c9365be3a35a6af38101cf8064b106714f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.060033 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(ce4d23b16ba8c80495d6ebcaba1f17c9365be3a35a6af38101cf8064b106714f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.060053 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(ce4d23b16ba8c80495d6ebcaba1f17c9365be3a35a6af38101cf8064b106714f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:06 crc kubenswrapper[4675]: E1121 13:47:06.060096 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-cb77k_openshift-operators(57e668d4-e3df-4b36-ad58-51e5b7f2d16e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-cb77k_openshift-operators(57e668d4-e3df-4b36-ad58-51e5b7f2d16e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-cb77k_openshift-operators_57e668d4-e3df-4b36-ad58-51e5b7f2d16e_0(ce4d23b16ba8c80495d6ebcaba1f17c9365be3a35a6af38101cf8064b106714f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" podUID="57e668d4-e3df-4b36-ad58-51e5b7f2d16e"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.675870 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tdbwd"]
Nov 21 13:47:11 crc kubenswrapper[4675]: E1121 13:47:11.676394 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="registry-server"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676407 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="registry-server"
Nov 21 13:47:11 crc kubenswrapper[4675]: E1121 13:47:11.676419 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="extract-utilities"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676425 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="extract-utilities"
Nov 21 13:47:11 crc kubenswrapper[4675]: E1121 13:47:11.676432 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="registry-server"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676438 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="registry-server"
Nov 21 13:47:11 crc kubenswrapper[4675]: E1121 13:47:11.676453 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="extract-utilities"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676458 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="extract-utilities"
Nov 21 13:47:11 crc kubenswrapper[4675]: E1121 13:47:11.676466 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="extract-content"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676472 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="extract-content"
Nov 21 13:47:11 crc kubenswrapper[4675]: E1121 13:47:11.676483 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="extract-content"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676489 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="extract-content"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676602 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ebe08f-b2ee-4612-bc20-39e31a66631e" containerName="registry-server"
Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.676615 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="406d0828-fd52-4538-aa6e-f60e4d58f0c0" containerName="registry-server"
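
Note: the cpu_manager/memory_manager "RemoveStaleState" entries above are resource-manager bookkeeping: admitting certified-operators-tdbwd prompts the kubelet to drop CPU and memory state left by two already-deleted catalog pods (UIDs d6ebe08f-b2ee-4612-bc20-39e31a66631e and 406d0828-fd52-4538-aa6e-f60e4d58f0c0). Despite the E-level severity, nothing is failing. A small sketch listing which stale containers were purged per pod UID, against the same hypothetical kubelet.log:

    import re
    from collections import defaultdict

    stale = defaultdict(list)
    with open("kubelet.log") as f:          # hypothetical: this journal excerpt saved to a file
        for line in f:
            m = re.search(r'"RemoveStaleState: removing container" podUID="([^"]+)" containerName="([^"]+)"', line)
            if m:
                stale[m.group(1)].append(m.group(2))

    for uid, names in stale.items():        # one entry per container of each deleted pod
        print(uid, "->", ", ".join(names))

Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.677442 4675 util.go:30] "No sandbox for pod can be found.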
Need to start a new one" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.696983 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdbwd"] Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.710135 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-utilities\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.710235 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-catalog-content\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.710308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f62f7\" (UniqueName: \"kubernetes.io/projected/e7c09a30-cffb-470d-87d8-e608413b9d87-kube-api-access-f62f7\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.811980 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-catalog-content\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.812049 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f62f7\" (UniqueName: \"kubernetes.io/projected/e7c09a30-cffb-470d-87d8-e608413b9d87-kube-api-access-f62f7\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.812139 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-utilities\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.812479 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-catalog-content\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.812488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-utilities\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.835974 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f62f7\" (UniqueName: \"kubernetes.io/projected/e7c09a30-cffb-470d-87d8-e608413b9d87-kube-api-access-f62f7\") pod \"certified-operators-tdbwd\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:11 crc kubenswrapper[4675]: I1121 13:47:11.995118 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:12 crc kubenswrapper[4675]: I1121 13:47:12.462267 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdbwd"] Nov 21 13:47:13 crc kubenswrapper[4675]: I1121 13:47:13.006018 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdbwd" event={"ID":"e7c09a30-cffb-470d-87d8-e608413b9d87","Type":"ContainerStarted","Data":"c0091850029296093a326737b80d6aa5721bb3480a9addbaf5f1d589ded2ced6"} Nov 21 13:47:13 crc kubenswrapper[4675]: E1121 13:47:13.179016 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c09a30_cffb_470d_87d8_e608413b9d87.slice/crio-2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:47:14 crc kubenswrapper[4675]: I1121 13:47:14.014594 4675 generic.go:334] "Generic (PLEG): container finished" podID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerID="2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc" exitCode=0 Nov 21 13:47:14 crc kubenswrapper[4675]: I1121 13:47:14.014645 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdbwd" event={"ID":"e7c09a30-cffb-470d-87d8-e608413b9d87","Type":"ContainerDied","Data":"2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc"} Nov 21 13:47:16 crc kubenswrapper[4675]: I1121 13:47:16.848179 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:16 crc kubenswrapper[4675]: I1121 13:47:16.849938 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:17 crc kubenswrapper[4675]: I1121 13:47:17.147047 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-cb77k"] Nov 21 13:47:17 crc kubenswrapper[4675]: W1121 13:47:17.161511 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e668d4_e3df_4b36_ad58_51e5b7f2d16e.slice/crio-75d7bcb5d01533f64e3f2431bbb350d674b3707bbc326217dc3e83ec0c30d816 WatchSource:0}: Error finding container 75d7bcb5d01533f64e3f2431bbb350d674b3707bbc326217dc3e83ec0c30d816: Status 404 returned error can't find the container with id 75d7bcb5d01533f64e3f2431bbb350d674b3707bbc326217dc3e83ec0c30d816 Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.038872 4675 generic.go:334] "Generic (PLEG): container finished" podID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerID="650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50" exitCode=0 Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.038944 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdbwd" event={"ID":"e7c09a30-cffb-470d-87d8-e608413b9d87","Type":"ContainerDied","Data":"650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50"} Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.039838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" event={"ID":"57e668d4-e3df-4b36-ad58-51e5b7f2d16e","Type":"ContainerStarted","Data":"75d7bcb5d01533f64e3f2431bbb350d674b3707bbc326217dc3e83ec0c30d816"} Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.848525 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.848595 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.849348 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:18 crc kubenswrapper[4675]: I1121 13:47:18.849576 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" Nov 21 13:47:19 crc kubenswrapper[4675]: I1121 13:47:19.233174 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf"] Nov 21 13:47:19 crc kubenswrapper[4675]: I1121 13:47:19.311227 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-67wfr"] Nov 21 13:47:19 crc kubenswrapper[4675]: W1121 13:47:19.318921 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c95951_b168_42ed_aab7_9ffe813b6d55.slice/crio-d11477eb381d7632065cd1a31dd813703f23b5c75d1c8385a913090e82067e9b WatchSource:0}: Error finding container d11477eb381d7632065cd1a31dd813703f23b5c75d1c8385a913090e82067e9b: Status 404 returned error can't find the container with id d11477eb381d7632065cd1a31dd813703f23b5c75d1c8385a913090e82067e9b Nov 21 13:47:19 crc kubenswrapper[4675]: I1121 13:47:19.848393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:19 crc kubenswrapper[4675]: I1121 13:47:19.848897 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.095181 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" event={"ID":"320effcd-ec3e-4741-b3d9-e0ec17502e50","Type":"ContainerStarted","Data":"35276b26c214a1de93836f24b9d2685e51426c41abdcfe9de9948185d06399f4"} Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.097352 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdbwd" event={"ID":"e7c09a30-cffb-470d-87d8-e608413b9d87","Type":"ContainerStarted","Data":"9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03"} Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.099413 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-67wfr" event={"ID":"01c95951-b168-42ed-aab7-9ffe813b6d55","Type":"ContainerStarted","Data":"d11477eb381d7632065cd1a31dd813703f23b5c75d1c8385a913090e82067e9b"} Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.118583 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tdbwd" podStartSLOduration=4.156406188 podStartE2EDuration="9.118564822s" podCreationTimestamp="2025-11-21 13:47:11 +0000 UTC" firstStartedPulling="2025-11-21 13:47:14.016422514 +0000 UTC m=+910.742837261" lastFinishedPulling="2025-11-21 13:47:18.978581178 +0000 UTC m=+915.704995895" observedRunningTime="2025-11-21 13:47:20.116332755 +0000 UTC m=+916.842747502" watchObservedRunningTime="2025-11-21 13:47:20.118564822 +0000 UTC m=+916.844979549" Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.290294 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlwqg"] Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.849582 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.849794 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.850459 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" Nov 21 13:47:20 crc kubenswrapper[4675]: I1121 13:47:20.850555 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" Nov 21 13:47:21 crc kubenswrapper[4675]: I1121 13:47:21.118702 4675 generic.go:334] "Generic (PLEG): container finished" podID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerID="a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40" exitCode=0 Nov 21 13:47:21 crc kubenswrapper[4675]: I1121 13:47:21.118814 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlwqg" event={"ID":"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b","Type":"ContainerDied","Data":"a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40"} Nov 21 13:47:21 crc kubenswrapper[4675]: I1121 13:47:21.118855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlwqg" event={"ID":"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b","Type":"ContainerStarted","Data":"d3ec9c05d30908502dfda2e1b5563b60c17649b4d3d458dd7b5ed2c99b09e5ae"} Nov 21 13:47:21 crc kubenswrapper[4675]: I1121 13:47:21.995496 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:21 crc kubenswrapper[4675]: I1121 13:47:21.995546 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:22 crc kubenswrapper[4675]: I1121 13:47:22.036517 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:27 crc kubenswrapper[4675]: I1121 13:47:27.474012 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844"] Nov 21 13:47:27 crc kubenswrapper[4675]: W1121 13:47:27.489307 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce64e510_eca3_48f5_858d_165c3d3cfba7.slice/crio-36735ca56452f835bdd03353ced8c18888c20875e134e85ab7ae979317be6866 WatchSource:0}: Error finding container 36735ca56452f835bdd03353ced8c18888c20875e134e85ab7ae979317be6866: Status 404 returned error can't find the container with id 36735ca56452f835bdd03353ced8c18888c20875e134e85ab7ae979317be6866 Nov 21 13:47:27 crc kubenswrapper[4675]: I1121 13:47:27.739552 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22"] Nov 21 13:47:27 crc kubenswrapper[4675]: W1121 13:47:27.747777 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a50974d_f334_4845_b892_5e4b97fc3d79.slice/crio-b675ff0a28f69ecacb3593eb42bd245bed4b634c63fabd11b787cf9569c25a6c WatchSource:0}: Error finding container 
b675ff0a28f69ecacb3593eb42bd245bed4b634c63fabd11b787cf9569c25a6c: Status 404 returned error can't find the container with id b675ff0a28f69ecacb3593eb42bd245bed4b634c63fabd11b787cf9569c25a6c Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.172231 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" event={"ID":"57e668d4-e3df-4b36-ad58-51e5b7f2d16e","Type":"ContainerStarted","Data":"a64eaae39eb979269a841499c3ec259f8dde89e0f07323925da4ef290c86c511"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.172683 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.176163 4675 generic.go:334] "Generic (PLEG): container finished" podID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerID="4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46" exitCode=0 Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.176233 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlwqg" event={"ID":"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b","Type":"ContainerDied","Data":"4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.179031 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" event={"ID":"ce64e510-eca3-48f5-858d-165c3d3cfba7","Type":"ContainerStarted","Data":"36735ca56452f835bdd03353ced8c18888c20875e134e85ab7ae979317be6866"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.183590 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.185190 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" event={"ID":"1a50974d-f334-4845-b892-5e4b97fc3d79","Type":"ContainerStarted","Data":"350d78b64cb62c25be00f3a70e72fe05fe5452f3e982308cfbbaa6d4340dd30e"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.185235 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" event={"ID":"1a50974d-f334-4845-b892-5e4b97fc3d79","Type":"ContainerStarted","Data":"b675ff0a28f69ecacb3593eb42bd245bed4b634c63fabd11b787cf9569c25a6c"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.188887 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-67wfr" event={"ID":"01c95951-b168-42ed-aab7-9ffe813b6d55","Type":"ContainerStarted","Data":"2b1428b18bf0d4eb6215adca958204c2eb1be1149d9f64ab0bbf7fd4ac195cb2"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.189151 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.193778 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" event={"ID":"320effcd-ec3e-4741-b3d9-e0ec17502e50","Type":"ContainerStarted","Data":"291c8b0408884c9778b88a082bd9b4d8640ffd53999d65160575c2f363529cb3"} Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.198521 4675 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operators/observability-operator-d8bb48f5d-cb77k" podStartSLOduration=29.015987822 podStartE2EDuration="39.198500168s" podCreationTimestamp="2025-11-21 13:46:49 +0000 UTC" firstStartedPulling="2025-11-21 13:47:17.165514225 +0000 UTC m=+913.891928972" lastFinishedPulling="2025-11-21 13:47:27.348026591 +0000 UTC m=+924.074441318" observedRunningTime="2025-11-21 13:47:28.193027419 +0000 UTC m=+924.919442156" watchObservedRunningTime="2025-11-21 13:47:28.198500168 +0000 UTC m=+924.924914895" Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.244790 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-67wfr" podStartSLOduration=31.214611598 podStartE2EDuration="39.244774777s" podCreationTimestamp="2025-11-21 13:46:49 +0000 UTC" firstStartedPulling="2025-11-21 13:47:19.321802842 +0000 UTC m=+916.048217569" lastFinishedPulling="2025-11-21 13:47:27.351966021 +0000 UTC m=+924.078380748" observedRunningTime="2025-11-21 13:47:28.242253443 +0000 UTC m=+924.968668190" watchObservedRunningTime="2025-11-21 13:47:28.244774777 +0000 UTC m=+924.971189504" Nov 21 13:47:28 crc kubenswrapper[4675]: I1121 13:47:28.270006 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf" podStartSLOduration=32.23432037 podStartE2EDuration="40.269986769s" podCreationTimestamp="2025-11-21 13:46:48 +0000 UTC" firstStartedPulling="2025-11-21 13:47:19.258597662 +0000 UTC m=+915.985012389" lastFinishedPulling="2025-11-21 13:47:27.294264061 +0000 UTC m=+924.020678788" observedRunningTime="2025-11-21 13:47:28.267956968 +0000 UTC m=+924.994371715" watchObservedRunningTime="2025-11-21 13:47:28.269986769 +0000 UTC m=+924.996401496" Nov 21 13:47:29 crc kubenswrapper[4675]: I1121 13:47:29.222085 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22" podStartSLOduration=41.222053236 podStartE2EDuration="41.222053236s" podCreationTimestamp="2025-11-21 13:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:47:29.22102443 +0000 UTC m=+925.947439167" watchObservedRunningTime="2025-11-21 13:47:29.222053236 +0000 UTC m=+925.948467963" Nov 21 13:47:31 crc kubenswrapper[4675]: I1121 13:47:31.215204 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlwqg" event={"ID":"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b","Type":"ContainerStarted","Data":"3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24"} Nov 21 13:47:31 crc kubenswrapper[4675]: I1121 13:47:31.235918 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mlwqg" podStartSLOduration=46.396360266 podStartE2EDuration="52.235901693s" podCreationTimestamp="2025-11-21 13:46:39 +0000 UTC" firstStartedPulling="2025-11-21 13:47:24.218636831 +0000 UTC m=+920.945051568" lastFinishedPulling="2025-11-21 13:47:30.058178258 +0000 UTC m=+926.784592995" observedRunningTime="2025-11-21 13:47:31.235208385 +0000 UTC m=+927.961623112" watchObservedRunningTime="2025-11-21 13:47:31.235901693 +0000 UTC m=+927.962316430" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.030350 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.069660 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdbwd"] Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.222080 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" event={"ID":"ce64e510-eca3-48f5-858d-165c3d3cfba7","Type":"ContainerStarted","Data":"b3343077e32e5ed3557fdc712a864821305a80497621c30e99b42e42cfbb59ae"} Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.222585 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tdbwd" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="registry-server" containerID="cri-o://9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03" gracePeriod=2 Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.242605 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-mx844" podStartSLOduration=39.916451933 podStartE2EDuration="44.242583641s" podCreationTimestamp="2025-11-21 13:46:48 +0000 UTC" firstStartedPulling="2025-11-21 13:47:27.491519937 +0000 UTC m=+924.217934664" lastFinishedPulling="2025-11-21 13:47:31.817651645 +0000 UTC m=+928.544066372" observedRunningTime="2025-11-21 13:47:32.238807055 +0000 UTC m=+928.965221782" watchObservedRunningTime="2025-11-21 13:47:32.242583641 +0000 UTC m=+928.968998398" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.583633 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.602950 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-utilities\") pod \"e7c09a30-cffb-470d-87d8-e608413b9d87\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.603022 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f62f7\" (UniqueName: \"kubernetes.io/projected/e7c09a30-cffb-470d-87d8-e608413b9d87-kube-api-access-f62f7\") pod \"e7c09a30-cffb-470d-87d8-e608413b9d87\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.603146 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-catalog-content\") pod \"e7c09a30-cffb-470d-87d8-e608413b9d87\" (UID: \"e7c09a30-cffb-470d-87d8-e608413b9d87\") " Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.603677 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-utilities" (OuterVolumeSpecName: "utilities") pod "e7c09a30-cffb-470d-87d8-e608413b9d87" (UID: "e7c09a30-cffb-470d-87d8-e608413b9d87"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.623928 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c09a30-cffb-470d-87d8-e608413b9d87-kube-api-access-f62f7" (OuterVolumeSpecName: "kube-api-access-f62f7") pod "e7c09a30-cffb-470d-87d8-e608413b9d87" (UID: "e7c09a30-cffb-470d-87d8-e608413b9d87"). InnerVolumeSpecName "kube-api-access-f62f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.651920 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7c09a30-cffb-470d-87d8-e608413b9d87" (UID: "e7c09a30-cffb-470d-87d8-e608413b9d87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.704795 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.704830 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c09a30-cffb-470d-87d8-e608413b9d87-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:32 crc kubenswrapper[4675]: I1121 13:47:32.704840 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f62f7\" (UniqueName: \"kubernetes.io/projected/e7c09a30-cffb-470d-87d8-e608413b9d87-kube-api-access-f62f7\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.230292 4675 generic.go:334] "Generic (PLEG): container finished" podID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerID="9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03" exitCode=0 Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.230369 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdbwd" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.230391 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdbwd" event={"ID":"e7c09a30-cffb-470d-87d8-e608413b9d87","Type":"ContainerDied","Data":"9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03"} Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.230469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdbwd" event={"ID":"e7c09a30-cffb-470d-87d8-e608413b9d87","Type":"ContainerDied","Data":"c0091850029296093a326737b80d6aa5721bb3480a9addbaf5f1d589ded2ced6"} Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.230511 4675 scope.go:117] "RemoveContainer" containerID="9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.248357 4675 scope.go:117] "RemoveContainer" containerID="650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.252826 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdbwd"] Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.259587 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tdbwd"] Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.270254 4675 scope.go:117] "RemoveContainer" containerID="2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.292052 4675 scope.go:117] "RemoveContainer" containerID="9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03" Nov 21 13:47:33 crc kubenswrapper[4675]: E1121 13:47:33.293259 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03\": container with ID starting with 9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03 not found: ID does not exist" containerID="9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.293345 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03"} err="failed to get container status \"9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03\": rpc error: code = NotFound desc = could not find container \"9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03\": container with ID starting with 9fb7cd1cabf7fa84a1f18e4f0b73e6a5e4e8e938a96d29abf67f35466982ab03 not found: ID does not exist" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.293440 4675 scope.go:117] "RemoveContainer" containerID="650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50" Nov 21 13:47:33 crc kubenswrapper[4675]: E1121 13:47:33.294441 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50\": container with ID starting with 650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50 not found: ID does not exist" containerID="650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.294464 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50"} err="failed to get container status \"650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50\": rpc error: code = NotFound desc = could not find container \"650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50\": container with ID starting with 650213983ac2518f8dd1bea86ece99448bab7db5aaf2e57983f7b2ad58a00f50 not found: ID does not exist" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.294487 4675 scope.go:117] "RemoveContainer" containerID="2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc" Nov 21 13:47:33 crc kubenswrapper[4675]: E1121 13:47:33.295536 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc\": container with ID starting with 2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc not found: ID does not exist" containerID="2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc" Nov 21 13:47:33 crc kubenswrapper[4675]: I1121 13:47:33.295570 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc"} err="failed to get container status \"2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc\": rpc error: code = NotFound desc = could not find container \"2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc\": container with ID starting with 2c21bdc708f9ec1c91e9156e47c514cd6fc2f165cd2011a92dc1e622ce22e4cc not found: ID does not exist" Nov 21 13:47:34 crc kubenswrapper[4675]: I1121 13:47:34.856945 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" path="/var/lib/kubelet/pods/e7c09a30-cffb-470d-87d8-e608413b9d87/volumes" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.103101 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bqb55"] Nov 21 13:47:35 crc kubenswrapper[4675]: E1121 13:47:35.103315 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="extract-content" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.103326 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="extract-content" Nov 21 13:47:35 crc kubenswrapper[4675]: E1121 13:47:35.103334 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="registry-server" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.103340 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="registry-server" Nov 21 13:47:35 crc kubenswrapper[4675]: E1121 13:47:35.103367 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="extract-utilities" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.103373 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" containerName="extract-utilities" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.103574 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c09a30-cffb-470d-87d8-e608413b9d87" 
containerName="registry-server" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.103964 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.106605 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2fjh7" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.106640 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.106703 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.114779 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-2c6xq"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.115856 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-2c6xq" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.118714 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-x8vz9" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.118940 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bqb55"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.128329 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-2c6xq"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.143248 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24gg5\" (UniqueName: \"kubernetes.io/projected/054749e0-ba55-43d1-a8d0-3cca3a0b15cf-kube-api-access-24gg5\") pod \"cert-manager-cainjector-7f985d654d-bqb55\" (UID: \"054749e0-ba55-43d1-a8d0-3cca3a0b15cf\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.143291 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvslt\" (UniqueName: \"kubernetes.io/projected/5ffe60a3-3d75-49c3-9340-0680d558e18b-kube-api-access-wvslt\") pod \"cert-manager-5b446d88c5-2c6xq\" (UID: \"5ffe60a3-3d75-49c3-9340-0680d558e18b\") " pod="cert-manager/cert-manager-5b446d88c5-2c6xq" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.143452 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-r47qh"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.144250 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.147781 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mzxhv" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.155696 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-r47qh"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.244500 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24gg5\" (UniqueName: \"kubernetes.io/projected/054749e0-ba55-43d1-a8d0-3cca3a0b15cf-kube-api-access-24gg5\") pod \"cert-manager-cainjector-7f985d654d-bqb55\" (UID: \"054749e0-ba55-43d1-a8d0-3cca3a0b15cf\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.244540 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvslt\" (UniqueName: \"kubernetes.io/projected/5ffe60a3-3d75-49c3-9340-0680d558e18b-kube-api-access-wvslt\") pod \"cert-manager-5b446d88c5-2c6xq\" (UID: \"5ffe60a3-3d75-49c3-9340-0680d558e18b\") " pod="cert-manager/cert-manager-5b446d88c5-2c6xq" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.244636 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qp95\" (UniqueName: \"kubernetes.io/projected/0ba7a123-c240-4c8d-bd82-974e63a888cf-kube-api-access-6qp95\") pod \"cert-manager-webhook-5655c58dd6-r47qh\" (UID: \"0ba7a123-c240-4c8d-bd82-974e63a888cf\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.263396 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24gg5\" (UniqueName: \"kubernetes.io/projected/054749e0-ba55-43d1-a8d0-3cca3a0b15cf-kube-api-access-24gg5\") pod \"cert-manager-cainjector-7f985d654d-bqb55\" (UID: \"054749e0-ba55-43d1-a8d0-3cca3a0b15cf\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.264206 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvslt\" (UniqueName: \"kubernetes.io/projected/5ffe60a3-3d75-49c3-9340-0680d558e18b-kube-api-access-wvslt\") pod \"cert-manager-5b446d88c5-2c6xq\" (UID: \"5ffe60a3-3d75-49c3-9340-0680d558e18b\") " pod="cert-manager/cert-manager-5b446d88c5-2c6xq" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.345526 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qp95\" (UniqueName: \"kubernetes.io/projected/0ba7a123-c240-4c8d-bd82-974e63a888cf-kube-api-access-6qp95\") pod \"cert-manager-webhook-5655c58dd6-r47qh\" (UID: \"0ba7a123-c240-4c8d-bd82-974e63a888cf\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.365727 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qp95\" (UniqueName: \"kubernetes.io/projected/0ba7a123-c240-4c8d-bd82-974e63a888cf-kube-api-access-6qp95\") pod \"cert-manager-webhook-5655c58dd6-r47qh\" (UID: \"0ba7a123-c240-4c8d-bd82-974e63a888cf\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.422458 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.433092 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-2c6xq" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.459739 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.542160 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n7bcx" Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.747394 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bqb55"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.851780 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-r47qh"] Nov 21 13:47:35 crc kubenswrapper[4675]: I1121 13:47:35.867342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-2c6xq"] Nov 21 13:47:36 crc kubenswrapper[4675]: I1121 13:47:36.250404 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" event={"ID":"054749e0-ba55-43d1-a8d0-3cca3a0b15cf","Type":"ContainerStarted","Data":"77e0b7b0ef2cf0fb20ea1633f2d2b4de0e03a4b06c835d31c80bda1a48458db5"} Nov 21 13:47:36 crc kubenswrapper[4675]: I1121 13:47:36.251339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-2c6xq" event={"ID":"5ffe60a3-3d75-49c3-9340-0680d558e18b","Type":"ContainerStarted","Data":"4eab68437b277fe703278a197eb3c781943e2d2f1998f7fa954821fe664bd629"} Nov 21 13:47:36 crc kubenswrapper[4675]: I1121 13:47:36.252454 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" event={"ID":"0ba7a123-c240-4c8d-bd82-974e63a888cf","Type":"ContainerStarted","Data":"b2fa322b1abb538e821c90e123fe0a7d5b0aac73a871ef969d43a0f9b449cfc4"} Nov 21 13:47:39 crc kubenswrapper[4675]: I1121 13:47:39.732786 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-67wfr" Nov 21 13:47:40 crc kubenswrapper[4675]: I1121 13:47:40.137896 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:40 crc kubenswrapper[4675]: I1121 13:47:40.137941 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:40 crc kubenswrapper[4675]: I1121 13:47:40.191478 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:40 crc kubenswrapper[4675]: I1121 13:47:40.334780 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.289439 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" event={"ID":"0ba7a123-c240-4c8d-bd82-974e63a888cf","Type":"ContainerStarted","Data":"20c78c7fc5291a7b986b0c16ca69c4f57f8ce87e52963a2d899a234a183b542a"} Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.289595 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.290683 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" event={"ID":"054749e0-ba55-43d1-a8d0-3cca3a0b15cf","Type":"ContainerStarted","Data":"a31cbf345cdaea0646d394fc4bc20bea889d8b71d7c4df764c7a53f09db09d30"} Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.292241 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-2c6xq" event={"ID":"5ffe60a3-3d75-49c3-9340-0680d558e18b","Type":"ContainerStarted","Data":"54e5388cb69c2ca20c70329bab6ee8c5390225f55b776d36e6b39d5e74a9e496"} Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.313142 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" podStartSLOduration=1.9991035959999999 podStartE2EDuration="6.313025393s" podCreationTimestamp="2025-11-21 13:47:35 +0000 UTC" firstStartedPulling="2025-11-21 13:47:35.855401367 +0000 UTC m=+932.581816094" lastFinishedPulling="2025-11-21 13:47:40.169323164 +0000 UTC m=+936.895737891" observedRunningTime="2025-11-21 13:47:41.308671652 +0000 UTC m=+938.035086389" watchObservedRunningTime="2025-11-21 13:47:41.313025393 +0000 UTC m=+938.039440120" Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.323519 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-2c6xq" podStartSLOduration=1.9934663019999999 podStartE2EDuration="6.323501959s" podCreationTimestamp="2025-11-21 13:47:35 +0000 UTC" firstStartedPulling="2025-11-21 13:47:35.857161952 +0000 UTC m=+932.583576679" lastFinishedPulling="2025-11-21 13:47:40.187197609 +0000 UTC m=+936.913612336" observedRunningTime="2025-11-21 13:47:41.322154745 +0000 UTC m=+938.048569472" watchObservedRunningTime="2025-11-21 13:47:41.323501959 +0000 UTC m=+938.049916686" Nov 21 13:47:41 crc kubenswrapper[4675]: I1121 13:47:41.338634 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-bqb55" podStartSLOduration=1.944373552 podStartE2EDuration="6.338617975s" podCreationTimestamp="2025-11-21 13:47:35 +0000 UTC" firstStartedPulling="2025-11-21 13:47:35.767800635 +0000 UTC m=+932.494215362" lastFinishedPulling="2025-11-21 13:47:40.162045058 +0000 UTC m=+936.888459785" observedRunningTime="2025-11-21 13:47:41.335553537 +0000 UTC m=+938.061968274" watchObservedRunningTime="2025-11-21 13:47:41.338617975 +0000 UTC m=+938.065032702" Nov 21 13:47:45 crc kubenswrapper[4675]: I1121 13:47:45.465817 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-r47qh" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.281003 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlwqg"] Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.281393 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mlwqg" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="registry-server" containerID="cri-o://3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24" gracePeriod=2 Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.660670 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.833496 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-catalog-content\") pod \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.833659 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tt5\" (UniqueName: \"kubernetes.io/projected/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-kube-api-access-t7tt5\") pod \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.833722 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-utilities\") pod \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\" (UID: \"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b\") " Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.834742 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-utilities" (OuterVolumeSpecName: "utilities") pod "db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" (UID: "db6aaca7-9dda-4eb0-a877-f31bc34bfd8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.841315 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-kube-api-access-t7tt5" (OuterVolumeSpecName: "kube-api-access-t7tt5") pod "db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" (UID: "db6aaca7-9dda-4eb0-a877-f31bc34bfd8b"). InnerVolumeSpecName "kube-api-access-t7tt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.889695 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" (UID: "db6aaca7-9dda-4eb0-a877-f31bc34bfd8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.934846 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tt5\" (UniqueName: \"kubernetes.io/projected/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-kube-api-access-t7tt5\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.934891 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:46 crc kubenswrapper[4675]: I1121 13:47:46.934904 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.327494 4675 generic.go:334] "Generic (PLEG): container finished" podID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerID="3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24" exitCode=0 Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.327544 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlwqg" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.327534 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlwqg" event={"ID":"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b","Type":"ContainerDied","Data":"3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24"} Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.327667 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlwqg" event={"ID":"db6aaca7-9dda-4eb0-a877-f31bc34bfd8b","Type":"ContainerDied","Data":"d3ec9c05d30908502dfda2e1b5563b60c17649b4d3d458dd7b5ed2c99b09e5ae"} Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.327691 4675 scope.go:117] "RemoveContainer" containerID="3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.344391 4675 scope.go:117] "RemoveContainer" containerID="4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.358632 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlwqg"] Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.364193 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mlwqg"] Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.381409 4675 scope.go:117] "RemoveContainer" containerID="a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.394852 4675 scope.go:117] "RemoveContainer" containerID="3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24" Nov 21 13:47:47 crc kubenswrapper[4675]: E1121 13:47:47.395329 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24\": container with ID starting with 3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24 not found: ID does not exist" containerID="3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.395372 
4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24"} err="failed to get container status \"3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24\": rpc error: code = NotFound desc = could not find container \"3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24\": container with ID starting with 3e931e96f6e2aedcfff665eb7365e1eefdf0716a498ff27779b048b2f5b94d24 not found: ID does not exist" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.395428 4675 scope.go:117] "RemoveContainer" containerID="4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46" Nov 21 13:47:47 crc kubenswrapper[4675]: E1121 13:47:47.395879 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46\": container with ID starting with 4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46 not found: ID does not exist" containerID="4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.395924 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46"} err="failed to get container status \"4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46\": rpc error: code = NotFound desc = could not find container \"4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46\": container with ID starting with 4b8d6dcc481fd380e993661d4818b934cd2c4c0c232b617533a122e14f8acb46 not found: ID does not exist" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.395954 4675 scope.go:117] "RemoveContainer" containerID="a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40" Nov 21 13:47:47 crc kubenswrapper[4675]: E1121 13:47:47.396351 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40\": container with ID starting with a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40 not found: ID does not exist" containerID="a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40" Nov 21 13:47:47 crc kubenswrapper[4675]: I1121 13:47:47.396409 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40"} err="failed to get container status \"a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40\": rpc error: code = NotFound desc = could not find container \"a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40\": container with ID starting with a7d260ee238f45d72f9ac67ca32456a84d57e2ca0592a7beed63cf550d58bc40 not found: ID does not exist" Nov 21 13:47:48 crc kubenswrapper[4675]: I1121 13:47:48.856673 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" path="/var/lib/kubelet/pods/db6aaca7-9dda-4eb0-a877-f31bc34bfd8b/volumes" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.647529 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9"] Nov 21 13:48:09 crc kubenswrapper[4675]: E1121 13:48:09.648234 4675 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="registry-server" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.648247 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="registry-server" Nov 21 13:48:09 crc kubenswrapper[4675]: E1121 13:48:09.648262 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="extract-utilities" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.648268 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="extract-utilities" Nov 21 13:48:09 crc kubenswrapper[4675]: E1121 13:48:09.648280 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="extract-content" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.648286 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="extract-content" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.648394 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="db6aaca7-9dda-4eb0-a877-f31bc34bfd8b" containerName="registry-server" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.649252 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.651952 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.658850 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9"] Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.789742 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv"] Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.791031 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.804200 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv"] Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.804550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.804779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.804830 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7vg\" (UniqueName: \"kubernetes.io/projected/c495bdbc-57d1-4c92-8276-2769d303f189-kube-api-access-5k7vg\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.905787 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.905836 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.905938 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7vg\" (UniqueName: \"kubernetes.io/projected/c495bdbc-57d1-4c92-8276-2769d303f189-kube-api-access-5k7vg\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.906037 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " 
pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.906113 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.906220 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt8nf\" (UniqueName: \"kubernetes.io/projected/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-kube-api-access-bt8nf\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.906530 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.906565 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.924073 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7vg\" (UniqueName: \"kubernetes.io/projected/c495bdbc-57d1-4c92-8276-2769d303f189-kube-api-access-5k7vg\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:09 crc kubenswrapper[4675]: I1121 13:48:09.963534 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.007319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.007676 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt8nf\" (UniqueName: \"kubernetes.io/projected/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-kube-api-access-bt8nf\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.007757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.008144 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.008154 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.026526 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt8nf\" (UniqueName: \"kubernetes.io/projected/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-kube-api-access-bt8nf\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.105253 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.319798 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv"] Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.368346 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9"] Nov 21 13:48:10 crc kubenswrapper[4675]: W1121 13:48:10.381926 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc495bdbc_57d1_4c92_8276_2769d303f189.slice/crio-e8d6edeae5756f99b8e0e4606dbd80403cdb5d060a8a685a5ba848194b124546 WatchSource:0}: Error finding container e8d6edeae5756f99b8e0e4606dbd80403cdb5d060a8a685a5ba848194b124546: Status 404 returned error can't find the container with id e8d6edeae5756f99b8e0e4606dbd80403cdb5d060a8a685a5ba848194b124546 Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.466444 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" event={"ID":"c495bdbc-57d1-4c92-8276-2769d303f189","Type":"ContainerStarted","Data":"e8d6edeae5756f99b8e0e4606dbd80403cdb5d060a8a685a5ba848194b124546"} Nov 21 13:48:10 crc kubenswrapper[4675]: I1121 13:48:10.467528 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" event={"ID":"7fcb22a1-888b-416d-b2fe-22eb8cdc928b","Type":"ContainerStarted","Data":"f3ab71c0c3ba3a9c76d6d413a9a2b5b398139790812b298e360a76213706ef41"} Nov 21 13:48:11 crc kubenswrapper[4675]: I1121 13:48:11.473490 4675 generic.go:334] "Generic (PLEG): container finished" podID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerID="7b95df238a276249761e2676299f19ba5f6c020c56baeb8fee47b6242c086399" exitCode=0 Nov 21 13:48:11 crc kubenswrapper[4675]: I1121 13:48:11.473598 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" event={"ID":"7fcb22a1-888b-416d-b2fe-22eb8cdc928b","Type":"ContainerDied","Data":"7b95df238a276249761e2676299f19ba5f6c020c56baeb8fee47b6242c086399"} Nov 21 13:48:11 crc kubenswrapper[4675]: I1121 13:48:11.475173 4675 generic.go:334] "Generic (PLEG): container finished" podID="c495bdbc-57d1-4c92-8276-2769d303f189" containerID="6e95b648db6a35c3977099f74bf8687b590f147dfd57c0b5bfb1442fff87f601" exitCode=0 Nov 21 13:48:11 crc kubenswrapper[4675]: I1121 13:48:11.475215 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" event={"ID":"c495bdbc-57d1-4c92-8276-2769d303f189","Type":"ContainerDied","Data":"6e95b648db6a35c3977099f74bf8687b590f147dfd57c0b5bfb1442fff87f601"} Nov 21 13:48:14 crc kubenswrapper[4675]: I1121 13:48:14.496247 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" event={"ID":"7fcb22a1-888b-416d-b2fe-22eb8cdc928b","Type":"ContainerStarted","Data":"8fdd9f92f7618aa7a1161177a29603ea5d5429eac7708688625747b3bfbb3532"} Nov 21 13:48:15 crc kubenswrapper[4675]: I1121 13:48:15.512545 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="c495bdbc-57d1-4c92-8276-2769d303f189" containerID="ccc98df5db1d5b3f56108b6d3241e234e65db6b13dab8a0175a1d6dfd2136e00" exitCode=0 Nov 21 13:48:15 crc kubenswrapper[4675]: I1121 13:48:15.512647 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" event={"ID":"c495bdbc-57d1-4c92-8276-2769d303f189","Type":"ContainerDied","Data":"ccc98df5db1d5b3f56108b6d3241e234e65db6b13dab8a0175a1d6dfd2136e00"} Nov 21 13:48:15 crc kubenswrapper[4675]: I1121 13:48:15.522971 4675 generic.go:334] "Generic (PLEG): container finished" podID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerID="8fdd9f92f7618aa7a1161177a29603ea5d5429eac7708688625747b3bfbb3532" exitCode=0 Nov 21 13:48:15 crc kubenswrapper[4675]: I1121 13:48:15.523014 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" event={"ID":"7fcb22a1-888b-416d-b2fe-22eb8cdc928b","Type":"ContainerDied","Data":"8fdd9f92f7618aa7a1161177a29603ea5d5429eac7708688625747b3bfbb3532"} Nov 21 13:48:16 crc kubenswrapper[4675]: I1121 13:48:16.137146 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:48:16 crc kubenswrapper[4675]: I1121 13:48:16.137921 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:48:16 crc kubenswrapper[4675]: I1121 13:48:16.532483 4675 generic.go:334] "Generic (PLEG): container finished" podID="c495bdbc-57d1-4c92-8276-2769d303f189" containerID="ec34ad701bf7ca2c2c00e8f6933ae45889890a42772dfb23840d17fcb2198fd4" exitCode=0 Nov 21 13:48:16 crc kubenswrapper[4675]: I1121 13:48:16.532566 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" event={"ID":"c495bdbc-57d1-4c92-8276-2769d303f189","Type":"ContainerDied","Data":"ec34ad701bf7ca2c2c00e8f6933ae45889890a42772dfb23840d17fcb2198fd4"} Nov 21 13:48:16 crc kubenswrapper[4675]: I1121 13:48:16.535688 4675 generic.go:334] "Generic (PLEG): container finished" podID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerID="a0acdcb3d6bae783399bc075b7b78673fa8c3a25bbfd6dc3d86800a31d38a1c6" exitCode=0 Nov 21 13:48:16 crc kubenswrapper[4675]: I1121 13:48:16.535727 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" event={"ID":"7fcb22a1-888b-416d-b2fe-22eb8cdc928b","Type":"ContainerDied","Data":"a0acdcb3d6bae783399bc075b7b78673fa8c3a25bbfd6dc3d86800a31d38a1c6"} Nov 21 13:48:17 crc kubenswrapper[4675]: I1121 13:48:17.995713 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:17 crc kubenswrapper[4675]: I1121 13:48:17.999048 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.183425 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt8nf\" (UniqueName: \"kubernetes.io/projected/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-kube-api-access-bt8nf\") pod \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.183493 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-bundle\") pod \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.183576 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-util\") pod \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\" (UID: \"7fcb22a1-888b-416d-b2fe-22eb8cdc928b\") " Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.183611 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-bundle\") pod \"c495bdbc-57d1-4c92-8276-2769d303f189\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.183703 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7vg\" (UniqueName: \"kubernetes.io/projected/c495bdbc-57d1-4c92-8276-2769d303f189-kube-api-access-5k7vg\") pod \"c495bdbc-57d1-4c92-8276-2769d303f189\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.183731 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-util\") pod \"c495bdbc-57d1-4c92-8276-2769d303f189\" (UID: \"c495bdbc-57d1-4c92-8276-2769d303f189\") " Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.184871 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-bundle" (OuterVolumeSpecName: "bundle") pod "c495bdbc-57d1-4c92-8276-2769d303f189" (UID: "c495bdbc-57d1-4c92-8276-2769d303f189"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.185999 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-bundle" (OuterVolumeSpecName: "bundle") pod "7fcb22a1-888b-416d-b2fe-22eb8cdc928b" (UID: "7fcb22a1-888b-416d-b2fe-22eb8cdc928b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.192583 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-kube-api-access-bt8nf" (OuterVolumeSpecName: "kube-api-access-bt8nf") pod "7fcb22a1-888b-416d-b2fe-22eb8cdc928b" (UID: "7fcb22a1-888b-416d-b2fe-22eb8cdc928b"). InnerVolumeSpecName "kube-api-access-bt8nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.196825 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-util" (OuterVolumeSpecName: "util") pod "7fcb22a1-888b-416d-b2fe-22eb8cdc928b" (UID: "7fcb22a1-888b-416d-b2fe-22eb8cdc928b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.197227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c495bdbc-57d1-4c92-8276-2769d303f189-kube-api-access-5k7vg" (OuterVolumeSpecName: "kube-api-access-5k7vg") pod "c495bdbc-57d1-4c92-8276-2769d303f189" (UID: "c495bdbc-57d1-4c92-8276-2769d303f189"). InnerVolumeSpecName "kube-api-access-5k7vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.197948 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-util" (OuterVolumeSpecName: "util") pod "c495bdbc-57d1-4c92-8276-2769d303f189" (UID: "c495bdbc-57d1-4c92-8276-2769d303f189"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.285761 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt8nf\" (UniqueName: \"kubernetes.io/projected/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-kube-api-access-bt8nf\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.285840 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.285870 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fcb22a1-888b-416d-b2fe-22eb8cdc928b-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.285895 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.285920 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k7vg\" (UniqueName: \"kubernetes.io/projected/c495bdbc-57d1-4c92-8276-2769d303f189-kube-api-access-5k7vg\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.285942 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c495bdbc-57d1-4c92-8276-2769d303f189-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.553652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" event={"ID":"7fcb22a1-888b-416d-b2fe-22eb8cdc928b","Type":"ContainerDied","Data":"f3ab71c0c3ba3a9c76d6d413a9a2b5b398139790812b298e360a76213706ef41"} Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.553693 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ab71c0c3ba3a9c76d6d413a9a2b5b398139790812b298e360a76213706ef41" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.553673 4675 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.555922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" event={"ID":"c495bdbc-57d1-4c92-8276-2769d303f189","Type":"ContainerDied","Data":"e8d6edeae5756f99b8e0e4606dbd80403cdb5d060a8a685a5ba848194b124546"} Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.556009 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d6edeae5756f99b8e0e4606dbd80403cdb5d060a8a685a5ba848194b124546" Nov 21 13:48:18 crc kubenswrapper[4675]: I1121 13:48:18.556055 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.346607 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67"] Nov 21 13:48:28 crc kubenswrapper[4675]: E1121 13:48:28.348718 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="extract" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.348822 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="extract" Nov 21 13:48:28 crc kubenswrapper[4675]: E1121 13:48:28.348905 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="extract" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.348977 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="extract" Nov 21 13:48:28 crc kubenswrapper[4675]: E1121 13:48:28.349125 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="pull" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.349206 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="pull" Nov 21 13:48:28 crc kubenswrapper[4675]: E1121 13:48:28.349280 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="util" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.349348 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="util" Nov 21 13:48:28 crc kubenswrapper[4675]: E1121 13:48:28.349433 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="pull" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.349508 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="pull" Nov 21 13:48:28 crc kubenswrapper[4675]: E1121 13:48:28.349601 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="util" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.349679 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="util" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.349901 4675 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c495bdbc-57d1-4c92-8276-2769d303f189" containerName="extract" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.350007 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fcb22a1-888b-416d-b2fe-22eb8cdc928b" containerName="extract" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.350941 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.353404 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.353618 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-zqwvw" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.354238 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.354502 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.354734 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.354741 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.376461 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67"] Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.517856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xsb\" (UniqueName: \"kubernetes.io/projected/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-kube-api-access-x6xsb\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.517918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-apiservice-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.517984 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-manager-config\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.518044 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.518094 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-webhook-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.619258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.619317 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-webhook-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.619377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xsb\" (UniqueName: \"kubernetes.io/projected/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-kube-api-access-x6xsb\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.619400 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-apiservice-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.619465 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-manager-config\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.620407 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-manager-config\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.627585 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-webhook-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.630658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-apiservice-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.631840 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.657023 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xsb\" (UniqueName: \"kubernetes.io/projected/e7c19bc7-9927-4cb7-98e2-2f834e3ff496-kube-api-access-x6xsb\") pod \"loki-operator-controller-manager-58d5765bd4-29h67\" (UID: \"e7c19bc7-9927-4cb7-98e2-2f834e3ff496\") " pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:28 crc kubenswrapper[4675]: I1121 13:48:28.684546 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.003588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67"] Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.640376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" event={"ID":"e7c19bc7-9927-4cb7-98e2-2f834e3ff496","Type":"ContainerStarted","Data":"5433bdce38a1e9ff369fae3f5f57f326b30ef29c19634c01621d35981bb56db0"} Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.763828 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-5blzb"] Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.765228 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.767110 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.767235 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-7ndbp" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.775364 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.777610 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-5blzb"] Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.845579 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8rr\" (UniqueName: \"kubernetes.io/projected/91cca7cf-0a78-48e9-80ca-7d7c7e93d0da-kube-api-access-zt8rr\") pod \"cluster-logging-operator-ff9846bd-5blzb\" (UID: \"91cca7cf-0a78-48e9-80ca-7d7c7e93d0da\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.946541 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8rr\" (UniqueName: \"kubernetes.io/projected/91cca7cf-0a78-48e9-80ca-7d7c7e93d0da-kube-api-access-zt8rr\") pod \"cluster-logging-operator-ff9846bd-5blzb\" (UID: \"91cca7cf-0a78-48e9-80ca-7d7c7e93d0da\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" Nov 21 13:48:29 crc kubenswrapper[4675]: I1121 13:48:29.967139 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8rr\" (UniqueName: \"kubernetes.io/projected/91cca7cf-0a78-48e9-80ca-7d7c7e93d0da-kube-api-access-zt8rr\") pod \"cluster-logging-operator-ff9846bd-5blzb\" (UID: \"91cca7cf-0a78-48e9-80ca-7d7c7e93d0da\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" Nov 21 13:48:30 crc kubenswrapper[4675]: I1121 13:48:30.084795 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" Nov 21 13:48:30 crc kubenswrapper[4675]: I1121 13:48:30.283566 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-5blzb"] Nov 21 13:48:30 crc kubenswrapper[4675]: W1121 13:48:30.303741 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91cca7cf_0a78_48e9_80ca_7d7c7e93d0da.slice/crio-3275572aa0637baec6bd1ab2d6fefb3443240c46bfb479cde778aaa4f7480a14 WatchSource:0}: Error finding container 3275572aa0637baec6bd1ab2d6fefb3443240c46bfb479cde778aaa4f7480a14: Status 404 returned error can't find the container with id 3275572aa0637baec6bd1ab2d6fefb3443240c46bfb479cde778aaa4f7480a14 Nov 21 13:48:30 crc kubenswrapper[4675]: I1121 13:48:30.647372 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" event={"ID":"91cca7cf-0a78-48e9-80ca-7d7c7e93d0da","Type":"ContainerStarted","Data":"3275572aa0637baec6bd1ab2d6fefb3443240c46bfb479cde778aaa4f7480a14"} Nov 21 13:48:36 crc kubenswrapper[4675]: I1121 13:48:36.695726 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" event={"ID":"91cca7cf-0a78-48e9-80ca-7d7c7e93d0da","Type":"ContainerStarted","Data":"b7dddcb7f3352aa34dcdedd5567efd01f80a638c46b1799bb049e8afb06a0663"} Nov 21 13:48:36 crc kubenswrapper[4675]: I1121 13:48:36.698932 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" event={"ID":"e7c19bc7-9927-4cb7-98e2-2f834e3ff496","Type":"ContainerStarted","Data":"32d9ea8d3a55a3b5b1514c1dbeaa28f2778b5ab5998d032537ea300f18749f02"} Nov 21 13:48:36 crc kubenswrapper[4675]: I1121 13:48:36.722319 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-5blzb" podStartSLOduration=2.014104989 podStartE2EDuration="7.722298509s" podCreationTimestamp="2025-11-21 13:48:29 +0000 UTC" firstStartedPulling="2025-11-21 13:48:30.310159204 +0000 UTC m=+987.036573931" lastFinishedPulling="2025-11-21 13:48:36.018352724 +0000 UTC m=+992.744767451" observedRunningTime="2025-11-21 13:48:36.716000579 +0000 UTC m=+993.442415316" watchObservedRunningTime="2025-11-21 13:48:36.722298509 +0000 UTC m=+993.448713236" Nov 21 13:48:44 crc kubenswrapper[4675]: I1121 13:48:44.762770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" event={"ID":"e7c19bc7-9927-4cb7-98e2-2f834e3ff496","Type":"ContainerStarted","Data":"9e2928f7ae0f4737656df794a504868085983c49a0068a73ed7d47ffd60f289d"} Nov 21 13:48:44 crc kubenswrapper[4675]: I1121 13:48:44.763422 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:44 crc kubenswrapper[4675]: I1121 13:48:44.765459 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" Nov 21 13:48:44 crc kubenswrapper[4675]: I1121 13:48:44.783419 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-58d5765bd4-29h67" podStartSLOduration=1.7352823480000001 
podStartE2EDuration="16.783398096s" podCreationTimestamp="2025-11-21 13:48:28 +0000 UTC" firstStartedPulling="2025-11-21 13:48:29.014224587 +0000 UTC m=+985.740639314" lastFinishedPulling="2025-11-21 13:48:44.062340325 +0000 UTC m=+1000.788755062" observedRunningTime="2025-11-21 13:48:44.779213019 +0000 UTC m=+1001.505627756" watchObservedRunningTime="2025-11-21 13:48:44.783398096 +0000 UTC m=+1001.509812833" Nov 21 13:48:46 crc kubenswrapper[4675]: I1121 13:48:46.136445 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:48:46 crc kubenswrapper[4675]: I1121 13:48:46.136752 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.472675 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.474206 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.476670 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.477747 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.483700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.637051 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dafda537-25dd-4873-a09e-10ffc722425f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dafda537-25dd-4873-a09e-10ffc722425f\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") " pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.637210 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdcl\" (UniqueName: \"kubernetes.io/projected/c9537146-7008-4245-aac9-eb7fcbf13622-kube-api-access-2jdcl\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") " pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.738620 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dafda537-25dd-4873-a09e-10ffc722425f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dafda537-25dd-4873-a09e-10ffc722425f\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") " pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.738857 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jdcl\" (UniqueName: \"kubernetes.io/projected/c9537146-7008-4245-aac9-eb7fcbf13622-kube-api-access-2jdcl\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") " pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.742716 4675 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.742774 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dafda537-25dd-4873-a09e-10ffc722425f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dafda537-25dd-4873-a09e-10ffc722425f\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c2033cea20417c7547b4dba64aa70ee664964175e73f4926f4740cbc0f58c884/globalmount\"" pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.777211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jdcl\" (UniqueName: \"kubernetes.io/projected/c9537146-7008-4245-aac9-eb7fcbf13622-kube-api-access-2jdcl\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") " pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.782059 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dafda537-25dd-4873-a09e-10ffc722425f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dafda537-25dd-4873-a09e-10ffc722425f\") pod \"minio\" (UID: \"c9537146-7008-4245-aac9-eb7fcbf13622\") " pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.792232 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 21 13:48:49 crc kubenswrapper[4675]: I1121 13:48:49.992823 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 21 13:48:50 crc kubenswrapper[4675]: W1121 13:48:50.000422 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9537146_7008_4245_aac9_eb7fcbf13622.slice/crio-2dce45149a30a969bc90c0e5b74b0855e5f699e2f2b6f3772bc7d9690ead72f0 WatchSource:0}: Error finding container 2dce45149a30a969bc90c0e5b74b0855e5f699e2f2b6f3772bc7d9690ead72f0: Status 404 returned error can't find the container with id 2dce45149a30a969bc90c0e5b74b0855e5f699e2f2b6f3772bc7d9690ead72f0 Nov 21 13:48:50 crc kubenswrapper[4675]: I1121 13:48:50.820684 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c9537146-7008-4245-aac9-eb7fcbf13622","Type":"ContainerStarted","Data":"2dce45149a30a969bc90c0e5b74b0855e5f699e2f2b6f3772bc7d9690ead72f0"} Nov 21 13:48:54 crc kubenswrapper[4675]: I1121 13:48:54.848194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c9537146-7008-4245-aac9-eb7fcbf13622","Type":"ContainerStarted","Data":"c9acd47b5e1428df5618db80be7c6be923870ac1e788a9fecec5e3e8a7f1449e"} Nov 21 13:48:54 crc kubenswrapper[4675]: I1121 13:48:54.866342 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.690687508 podStartE2EDuration="7.866318473s" podCreationTimestamp="2025-11-21 13:48:47 +0000 UTC" firstStartedPulling="2025-11-21 13:48:50.003647374 +0000 UTC m=+1006.730062101" lastFinishedPulling="2025-11-21 13:48:54.179278339 +0000 UTC m=+1010.905693066" observedRunningTime="2025-11-21 13:48:54.861395438 +0000 UTC m=+1011.587810165" watchObservedRunningTime="2025-11-21 13:48:54.866318473 +0000 UTC m=+1011.592733210" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.673945 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-distributor-76cc67bf56-czh7n"] Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.675331 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.678292 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-m7jnz" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.678489 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.678802 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.678950 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.681731 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.690943 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-czh7n"] Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.772140 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.772221 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15313a35-1860-458c-9520-8eb44937ad1d-config\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.772255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll56f\" (UniqueName: \"kubernetes.io/projected/15313a35-1860-458c-9520-8eb44937ad1d-kube-api-access-ll56f\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.772305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.772362 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: 
\"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.865771 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-tdn52"] Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.866682 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.871380 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.871557 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.871603 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.879435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.879529 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.879593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15313a35-1860-458c-9520-8eb44937ad1d-config\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.879634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll56f\" (UniqueName: \"kubernetes.io/projected/15313a35-1860-458c-9520-8eb44937ad1d-kube-api-access-ll56f\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.879701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.880962 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: 
\"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.881052 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15313a35-1860-458c-9520-8eb44937ad1d-config\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.890291 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.895524 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-tdn52"] Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.906944 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/15313a35-1860-458c-9520-8eb44937ad1d-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.908969 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll56f\" (UniqueName: \"kubernetes.io/projected/15313a35-1860-458c-9520-8eb44937ad1d-kube-api-access-ll56f\") pod \"logging-loki-distributor-76cc67bf56-czh7n\" (UID: \"15313a35-1860-458c-9520-8eb44937ad1d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.939662 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq"] Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.946982 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.950927 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.951435 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.973620 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq"] Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.981375 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.981686 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-config\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.981807 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ddm\" (UniqueName: \"kubernetes.io/projected/c804a918-f222-49f2-87b7-14b0dae0d37f-kube-api-access-t5ddm\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.981949 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982280 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982551 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982656 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982686 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqbxb\" (UniqueName: \"kubernetes.io/projected/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-kube-api-access-gqbxb\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982745 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c804a918-f222-49f2-87b7-14b0dae0d37f-config\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.982970 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:58 crc kubenswrapper[4675]: I1121 13:48:58.998458 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.086830 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.086893 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-config\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.086918 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ddm\" (UniqueName: \"kubernetes.io/projected/c804a918-f222-49f2-87b7-14b0dae0d37f-kube-api-access-t5ddm\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.086956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.086989 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.087009 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.087035 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.087056 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " 
pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.087090 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqbxb\" (UniqueName: \"kubernetes.io/projected/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-kube-api-access-gqbxb\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.087109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c804a918-f222-49f2-87b7-14b0dae0d37f-config\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.087150 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.093011 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.095004 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.098872 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-config\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.101505 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c804a918-f222-49f2-87b7-14b0dae0d37f-config\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.102657 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.112268 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.124797 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.138966 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.145228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/c804a918-f222-49f2-87b7-14b0dae0d37f-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.146012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.146223 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.146521 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.146725 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.146955 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.150245 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.151316 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-7n2dv" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.151861 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.152543 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.158248 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqbxb\" (UniqueName: \"kubernetes.io/projected/fcc0cd18-60e3-4d70-8504-a0987a0cea4e-kube-api-access-gqbxb\") pod 
\"logging-loki-querier-5895d59bb8-tdn52\" (UID: \"fcc0cd18-60e3-4d70-8504-a0987a0cea4e\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.163739 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ddm\" (UniqueName: \"kubernetes.io/projected/c804a918-f222-49f2-87b7-14b0dae0d37f-kube-api-access-t5ddm\") pod \"logging-loki-query-frontend-84558f7c9f-65gdq\" (UID: \"c804a918-f222-49f2-87b7-14b0dae0d37f\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.178481 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.182015 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193101 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193127 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-rbac\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193417 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-lokistack-gateway\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193578 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblqg\" (UniqueName: \"kubernetes.io/projected/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-kube-api-access-bblqg\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: 
\"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.193656 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tenants\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.198661 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.252681 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.267132 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295393 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295478 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-lokistack-gateway\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295497 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295525 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295544 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.295562 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-rbac\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296018 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-lokistack-gateway\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296117 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsch6\" (UniqueName: \"kubernetes.io/projected/d8711ff7-1164-4f51-9748-d563536a90d3-kube-api-access-lsch6\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296136 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bblqg\" (UniqueName: \"kubernetes.io/projected/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-kube-api-access-bblqg\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296174 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tenants\") 
pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296218 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-rbac\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296285 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.296445 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tenants\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: E1121 13:48:59.299243 4675 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 21 13:48:59 crc kubenswrapper[4675]: E1121 13:48:59.299355 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tls-secret podName:3c26c6d8-717f-4d7d-9a42-bdb65213fe5c nodeName:}" failed. No retries permitted until 2025-11-21 13:48:59.799333096 +0000 UTC m=+1016.525747833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tls-secret") pod "logging-loki-gateway-6b7bc6b4d8-pflft" (UID: "3c26c6d8-717f-4d7d-9a42-bdb65213fe5c") : secret "logging-loki-gateway-http" not found Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.300506 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.300608 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-rbac\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.302246 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tenants\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.302348 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.302501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.303179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-lokistack-gateway\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.320234 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bblqg\" (UniqueName: \"kubernetes.io/projected/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-kube-api-access-bblqg\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398164 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: 
\"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsch6\" (UniqueName: \"kubernetes.io/projected/d8711ff7-1164-4f51-9748-d563536a90d3-kube-api-access-lsch6\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398279 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tenants\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398305 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-rbac\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398426 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398446 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-lokistack-gateway\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.398467 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.399388 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.399513 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: E1121 13:48:59.400525 4675 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 21 13:48:59 crc kubenswrapper[4675]: E1121 13:48:59.400595 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tls-secret podName:d8711ff7-1164-4f51-9748-d563536a90d3 nodeName:}" failed. No retries permitted until 2025-11-21 13:48:59.900576265 +0000 UTC m=+1016.626991472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tls-secret") pod "logging-loki-gateway-6b7bc6b4d8-mpc5k" (UID: "d8711ff7-1164-4f51-9748-d563536a90d3") : secret "logging-loki-gateway-http" not found Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.400910 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-rbac\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.401591 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d8711ff7-1164-4f51-9748-d563536a90d3-lokistack-gateway\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.402777 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.404266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tenants\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.446455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsch6\" (UniqueName: \"kubernetes.io/projected/d8711ff7-1164-4f51-9748-d563536a90d3-kube-api-access-lsch6\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.532152 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-czh7n"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.725211 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-querier-5895d59bb8-tdn52"] Nov 21 13:48:59 crc kubenswrapper[4675]: W1121 13:48:59.732529 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcc0cd18_60e3_4d70_8504_a0987a0cea4e.slice/crio-ed121116d4622581e348b61beea812e31ed0ae909dd1395fdf8750e9af165210 WatchSource:0}: Error finding container ed121116d4622581e348b61beea812e31ed0ae909dd1395fdf8750e9af165210: Status 404 returned error can't find the container with id ed121116d4622581e348b61beea812e31ed0ae909dd1395fdf8750e9af165210 Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.805580 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.812323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c26c6d8-717f-4d7d-9a42-bdb65213fe5c-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-pflft\" (UID: \"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.834402 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.835505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.837877 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.839369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.848826 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.871925 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.891935 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" event={"ID":"15313a35-1860-458c-9520-8eb44937ad1d","Type":"ContainerStarted","Data":"0e28355ae20263c2f2d6f9434af2284e850c7f97a273d68a9528c590e06a6a93"} Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.893315 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" event={"ID":"c804a918-f222-49f2-87b7-14b0dae0d37f","Type":"ContainerStarted","Data":"2ae611110ba7bb299be98b823d5fd822df8a8868b110b06414071b59c8ac3ff2"} Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.898657 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" event={"ID":"fcc0cd18-60e3-4d70-8504-a0987a0cea4e","Type":"ContainerStarted","Data":"ed121116d4622581e348b61beea812e31ed0ae909dd1395fdf8750e9af165210"} Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.906994 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907137 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907168 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907198 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907230 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jhfb\" (UniqueName: \"kubernetes.io/projected/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-kube-api-access-8jhfb\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907269 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-config\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.907348 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ca6582-8d94-441e-8ad4-2225713874e5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ca6582-8d94-441e-8ad4-2225713874e5\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.911765 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d8711ff7-1164-4f51-9748-d563536a90d3-tls-secret\") pod \"logging-loki-gateway-6b7bc6b4d8-mpc5k\" (UID: \"d8711ff7-1164-4f51-9748-d563536a90d3\") " pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.933477 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.934566 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.937135 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.938629 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 21 13:48:59 crc kubenswrapper[4675]: I1121 13:48:59.938904 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008257 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008315 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008335 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008355 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008383 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jhfb\" (UniqueName: \"kubernetes.io/projected/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-kube-api-access-8jhfb\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: 
I1121 13:49:00.008405 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-config\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008458 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008498 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nblv\" (UniqueName: \"kubernetes.io/projected/69149637-8974-41a7-b494-3db4c647e9de-kube-api-access-9nblv\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008519 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008542 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ca6582-8d94-441e-8ad4-2225713874e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ca6582-8d94-441e-8ad4-2225713874e5\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008581 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f312a43b-140a-4582-a9be-57698799461e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f312a43b-140a-4582-a9be-57698799461e\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/69149637-8974-41a7-b494-3db4c647e9de-config\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.008631 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.009867 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.010221 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-config\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.012533 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.012687 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.012945 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.012962 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.012987 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ca6582-8d94-441e-8ad4-2225713874e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ca6582-8d94-441e-8ad4-2225713874e5\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01a597f36e094257d14ba3e042fd5b37fa695d02740dc1469f7882481341b5e8/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.013012 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8968f3155d68d0787c3aaa0c207cfd0c32dacd440b3157fbedd3fa1e36e7f061/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.013754 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.027692 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jhfb\" (UniqueName: \"kubernetes.io/projected/f7b8a2bc-e416-4521-9fa2-44dd6bd69400-kube-api-access-8jhfb\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.031642 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.037713 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.041278 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.041403 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.044302 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.050761 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ca6582-8d94-441e-8ad4-2225713874e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96ca6582-8d94-441e-8ad4-2225713874e5\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.055197 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-837c80dd-a367-4cd1-ac94-13159f73fce1\") pod \"logging-loki-ingester-0\" (UID: \"f7b8a2bc-e416-4521-9fa2-44dd6bd69400\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.069000 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110388 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110450 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110482 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ed9558-944c-4917-9daf-657bc7f2cbf1-config\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110512 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110543 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110583 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110606 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ldx\" (UniqueName: \"kubernetes.io/projected/99ed9558-944c-4917-9daf-657bc7f2cbf1-kube-api-access-d6ldx\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110633 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110651 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nblv\" (UniqueName: \"kubernetes.io/projected/69149637-8974-41a7-b494-3db4c647e9de-kube-api-access-9nblv\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110669 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f312a43b-140a-4582-a9be-57698799461e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f312a43b-140a-4582-a9be-57698799461e\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110719 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110737 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69149637-8974-41a7-b494-3db4c647e9de-config\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " 
pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.110772 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.112011 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.112459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69149637-8974-41a7-b494-3db4c647e9de-config\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.113745 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.113773 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f312a43b-140a-4582-a9be-57698799461e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f312a43b-140a-4582-a9be-57698799461e\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d32196dc00043ee3a585e024aab2eba1afb0854d0fbecccc2ed0070929836c36/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.114313 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.122600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.122684 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.122984 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/69149637-8974-41a7-b494-3db4c647e9de-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.128054 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nblv\" (UniqueName: \"kubernetes.io/projected/69149637-8974-41a7-b494-3db4c647e9de-kube-api-access-9nblv\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.146084 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f312a43b-140a-4582-a9be-57698799461e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f312a43b-140a-4582-a9be-57698799461e\") pod \"logging-loki-compactor-0\" (UID: \"69149637-8974-41a7-b494-3db4c647e9de\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.160994 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212326 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212352 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ed9558-944c-4917-9daf-657bc7f2cbf1-config\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212418 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ldx\" (UniqueName: \"kubernetes.io/projected/99ed9558-944c-4917-9daf-657bc7f2cbf1-kube-api-access-d6ldx\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.212494 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.214947 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ed9558-944c-4917-9daf-657bc7f2cbf1-config\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.219383 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.219819 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.219857 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94957d568bd05ca9812d8e0c4213441848668eefda1dc2585c4e23cc70a806d2/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.219865 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.220078 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.232542 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ldx\" (UniqueName: \"kubernetes.io/projected/99ed9558-944c-4917-9daf-657bc7f2cbf1-kube-api-access-d6ldx\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.238868 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/99ed9558-944c-4917-9daf-657bc7f2cbf1-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.253753 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft"] Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.257080 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02062eaa-bc1e-488b-9afc-eec0d917f80f\") pod \"logging-loki-index-gateway-0\" (UID: \"99ed9558-944c-4917-9daf-657bc7f2cbf1\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: W1121 13:49:00.257493 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c26c6d8_717f_4d7d_9a42_bdb65213fe5c.slice/crio-f3ce939ffe4b7b61c123339ea01c767f271bc8e4682c6eacb39fbb5c00a27c57 WatchSource:0}: Error finding container f3ce939ffe4b7b61c123339ea01c767f271bc8e4682c6eacb39fbb5c00a27c57: Status 404 returned error can't find the container with id f3ce939ffe4b7b61c123339ea01c767f271bc8e4682c6eacb39fbb5c00a27c57 Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.269218 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.376709 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.544573 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k"] Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.624799 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 13:49:00 crc kubenswrapper[4675]: W1121 13:49:00.707201 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69149637_8974_41a7_b494_3db4c647e9de.slice/crio-3862b544d14f657c0e97258febe265130946f269c9eec0deae9bd01c5e64de54 WatchSource:0}: Error finding container 3862b544d14f657c0e97258febe265130946f269c9eec0deae9bd01c5e64de54: Status 404 returned error can't find the container with id 3862b544d14f657c0e97258febe265130946f269c9eec0deae9bd01c5e64de54 Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.710462 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.779001 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 13:49:00 crc kubenswrapper[4675]: W1121 13:49:00.794249 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ed9558_944c_4917_9daf_657bc7f2cbf1.slice/crio-d678a721dbb1dafe5292bb9909516156c680353a1a35eecd3d35a0f199bb3e4e WatchSource:0}: Error finding container d678a721dbb1dafe5292bb9909516156c680353a1a35eecd3d35a0f199bb3e4e: Status 404 returned error can't find the container with id d678a721dbb1dafe5292bb9909516156c680353a1a35eecd3d35a0f199bb3e4e Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.905532 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" event={"ID":"d8711ff7-1164-4f51-9748-d563536a90d3","Type":"ContainerStarted","Data":"0e9963fa4a9a2104352925c67f3311ee8c129ca8975754c618bb7f9ab7b2115d"} Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.906585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"99ed9558-944c-4917-9daf-657bc7f2cbf1","Type":"ContainerStarted","Data":"d678a721dbb1dafe5292bb9909516156c680353a1a35eecd3d35a0f199bb3e4e"} Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.907699 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f7b8a2bc-e416-4521-9fa2-44dd6bd69400","Type":"ContainerStarted","Data":"10b48830b8ae712d6c3b4461b87fb9ffda95655b959d31dba0c0dc2d03476544"} Nov 21 13:49:00 
crc kubenswrapper[4675]: I1121 13:49:00.908860 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"69149637-8974-41a7-b494-3db4c647e9de","Type":"ContainerStarted","Data":"3862b544d14f657c0e97258febe265130946f269c9eec0deae9bd01c5e64de54"} Nov 21 13:49:00 crc kubenswrapper[4675]: I1121 13:49:00.909884 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" event={"ID":"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c","Type":"ContainerStarted","Data":"f3ce939ffe4b7b61c123339ea01c767f271bc8e4682c6eacb39fbb5c00a27c57"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.952423 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"69149637-8974-41a7-b494-3db4c647e9de","Type":"ContainerStarted","Data":"6ab0d1c79a291c3bbf5d24a492a0589cf0a189bf65ad8b2010680584c07ca7da"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.953194 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.955584 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" event={"ID":"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c","Type":"ContainerStarted","Data":"a8fd9711931040f566024c23d0faad288998c2db0d5029eb796d7fa969b43e27"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.958986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" event={"ID":"fcc0cd18-60e3-4d70-8504-a0987a0cea4e","Type":"ContainerStarted","Data":"bf12227d672fc22816dec28d5f34afe0285c25839aafebcfe219d36a3fd96080"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.959044 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.961081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" event={"ID":"d8711ff7-1164-4f51-9748-d563536a90d3","Type":"ContainerStarted","Data":"9f54add4e7a0c2454fad55c43f6dbfd5118b6f8d9635d41404d298f2f7bb1670"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.969925 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" event={"ID":"15313a35-1860-458c-9520-8eb44937ad1d","Type":"ContainerStarted","Data":"af023a436e290dcf18e190dcfde1b633629dcb7c55fddb5721a310211a7ad100"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.976034 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.979972 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"99ed9558-944c-4917-9daf-657bc7f2cbf1","Type":"ContainerStarted","Data":"a23f6d580ef950926acdc838e04d45e5c2ef7a9d8e4d294c02d573c9f29c42eb"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.980122 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.984384 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" 
event={"ID":"c804a918-f222-49f2-87b7-14b0dae0d37f","Type":"ContainerStarted","Data":"1a2da8d53c57556af44ba8f3c3ce6e9a13b2e43206c9b26e97496b5175f883f5"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.985081 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.990323 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.769182131 podStartE2EDuration="7.990306585s" podCreationTimestamp="2025-11-21 13:48:58 +0000 UTC" firstStartedPulling="2025-11-21 13:49:00.710129459 +0000 UTC m=+1017.436544186" lastFinishedPulling="2025-11-21 13:49:04.931253913 +0000 UTC m=+1021.657668640" observedRunningTime="2025-11-21 13:49:05.986418276 +0000 UTC m=+1022.712833003" watchObservedRunningTime="2025-11-21 13:49:05.990306585 +0000 UTC m=+1022.716721312" Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.996296 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f7b8a2bc-e416-4521-9fa2-44dd6bd69400","Type":"ContainerStarted","Data":"4b6d61afc3885d782913a8822b734b2617f97e4a35854b1df7fbad1c905e661d"} Nov 21 13:49:05 crc kubenswrapper[4675]: I1121 13:49:05.996609 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:49:06 crc kubenswrapper[4675]: I1121 13:49:06.009619 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" podStartSLOduration=3.033987762 podStartE2EDuration="8.009593737s" podCreationTimestamp="2025-11-21 13:48:58 +0000 UTC" firstStartedPulling="2025-11-21 13:48:59.883880959 +0000 UTC m=+1016.610295686" lastFinishedPulling="2025-11-21 13:49:04.859486934 +0000 UTC m=+1021.585901661" observedRunningTime="2025-11-21 13:49:06.008729114 +0000 UTC m=+1022.735143851" watchObservedRunningTime="2025-11-21 13:49:06.009593737 +0000 UTC m=+1022.736008464" Nov 21 13:49:06 crc kubenswrapper[4675]: I1121 13:49:06.035042 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" podStartSLOduration=2.695918337 podStartE2EDuration="8.035024104s" podCreationTimestamp="2025-11-21 13:48:58 +0000 UTC" firstStartedPulling="2025-11-21 13:48:59.541863065 +0000 UTC m=+1016.268277792" lastFinishedPulling="2025-11-21 13:49:04.880968832 +0000 UTC m=+1021.607383559" observedRunningTime="2025-11-21 13:49:06.024958018 +0000 UTC m=+1022.751372755" watchObservedRunningTime="2025-11-21 13:49:06.035024104 +0000 UTC m=+1022.761438831" Nov 21 13:49:06 crc kubenswrapper[4675]: I1121 13:49:06.040870 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.961238204 podStartE2EDuration="7.040852143s" podCreationTimestamp="2025-11-21 13:48:59 +0000 UTC" firstStartedPulling="2025-11-21 13:49:00.79535173 +0000 UTC m=+1017.521766447" lastFinishedPulling="2025-11-21 13:49:04.874965659 +0000 UTC m=+1021.601380386" observedRunningTime="2025-11-21 13:49:06.038013181 +0000 UTC m=+1022.764427908" watchObservedRunningTime="2025-11-21 13:49:06.040852143 +0000 UTC m=+1022.767266870" Nov 21 13:49:06 crc kubenswrapper[4675]: I1121 13:49:06.062032 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" podStartSLOduration=2.960878998 podStartE2EDuration="8.062012172s" podCreationTimestamp="2025-11-21 13:48:58 +0000 UTC" firstStartedPulling="2025-11-21 13:48:59.734904363 +0000 UTC m=+1016.461319090" lastFinishedPulling="2025-11-21 13:49:04.836037537 +0000 UTC m=+1021.562452264" observedRunningTime="2025-11-21 13:49:06.056767798 +0000 UTC m=+1022.783182565" watchObservedRunningTime="2025-11-21 13:49:06.062012172 +0000 UTC m=+1022.788426899" Nov 21 13:49:06 crc kubenswrapper[4675]: I1121 13:49:06.078418 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.829857358 podStartE2EDuration="8.07839868s" podCreationTimestamp="2025-11-21 13:48:58 +0000 UTC" firstStartedPulling="2025-11-21 13:49:00.631792763 +0000 UTC m=+1017.358207490" lastFinishedPulling="2025-11-21 13:49:04.880334085 +0000 UTC m=+1021.606748812" observedRunningTime="2025-11-21 13:49:06.075026454 +0000 UTC m=+1022.801441181" watchObservedRunningTime="2025-11-21 13:49:06.07839868 +0000 UTC m=+1022.804813407" Nov 21 13:49:08 crc kubenswrapper[4675]: I1121 13:49:08.012227 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" event={"ID":"d8711ff7-1164-4f51-9748-d563536a90d3","Type":"ContainerStarted","Data":"1546e35462407968992d1137c5a22673050fa782f1ed8fb7dcefe691c23b3423"} Nov 21 13:49:08 crc kubenswrapper[4675]: I1121 13:49:08.014423 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" event={"ID":"3c26c6d8-717f-4d7d-9a42-bdb65213fe5c","Type":"ContainerStarted","Data":"ca2f76704017b05a61163c6098296a38a4e3d265aff3931e7cd425263aaeb5c3"} Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.020017 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.020057 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.020081 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.027451 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.030904 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.034100 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.041426 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" podStartSLOduration=3.484588167 podStartE2EDuration="10.041409199s" podCreationTimestamp="2025-11-21 13:48:59 +0000 UTC" firstStartedPulling="2025-11-21 13:49:00.558599949 +0000 UTC m=+1017.285014676" lastFinishedPulling="2025-11-21 13:49:07.115420981 +0000 UTC m=+1023.841835708" observedRunningTime="2025-11-21 13:49:09.039389498 +0000 UTC m=+1025.765804235" 
watchObservedRunningTime="2025-11-21 13:49:09.041409199 +0000 UTC m=+1025.767823926" Nov 21 13:49:09 crc kubenswrapper[4675]: I1121 13:49:09.070209 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-pflft" podStartSLOduration=3.214019394 podStartE2EDuration="10.070178162s" podCreationTimestamp="2025-11-21 13:48:59 +0000 UTC" firstStartedPulling="2025-11-21 13:49:00.265022619 +0000 UTC m=+1016.991437356" lastFinishedPulling="2025-11-21 13:49:07.121181397 +0000 UTC m=+1023.847596124" observedRunningTime="2025-11-21 13:49:09.06303423 +0000 UTC m=+1025.789448967" watchObservedRunningTime="2025-11-21 13:49:09.070178162 +0000 UTC m=+1025.796592899" Nov 21 13:49:10 crc kubenswrapper[4675]: I1121 13:49:10.029020 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:49:10 crc kubenswrapper[4675]: I1121 13:49:10.036893 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6b7bc6b4d8-mpc5k" Nov 21 13:49:16 crc kubenswrapper[4675]: I1121 13:49:16.136466 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:49:16 crc kubenswrapper[4675]: I1121 13:49:16.136837 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:49:16 crc kubenswrapper[4675]: I1121 13:49:16.136881 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:49:16 crc kubenswrapper[4675]: I1121 13:49:16.137631 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4478a9785c2c0cd8603759bbdd163dd836f7c97363478e7200b2c21e3d3682a"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:49:16 crc kubenswrapper[4675]: I1121 13:49:16.137701 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://e4478a9785c2c0cd8603759bbdd163dd836f7c97363478e7200b2c21e3d3682a" gracePeriod=600 Nov 21 13:49:19 crc kubenswrapper[4675]: I1121 13:49:19.084759 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="e4478a9785c2c0cd8603759bbdd163dd836f7c97363478e7200b2c21e3d3682a" exitCode=0 Nov 21 13:49:19 crc kubenswrapper[4675]: I1121 13:49:19.084834 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"e4478a9785c2c0cd8603759bbdd163dd836f7c97363478e7200b2c21e3d3682a"} Nov 21 13:49:19 crc kubenswrapper[4675]: I1121 
13:49:19.085325 4675 scope.go:117] "RemoveContainer" containerID="ebf6c1f49ce87c01f637a7eb4718589a49885f8f4445c9b07de3609e62a4334b" Nov 21 13:49:20 crc kubenswrapper[4675]: I1121 13:49:20.096311 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"fd8ece146e7469ff47abc44df983434b24140bc8b8a19319d303006a9e5badd2"} Nov 21 13:49:20 crc kubenswrapper[4675]: I1121 13:49:20.167286 4675 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 21 13:49:20 crc kubenswrapper[4675]: I1121 13:49:20.167342 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f7b8a2bc-e416-4521-9fa2-44dd6bd69400" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:49:20 crc kubenswrapper[4675]: I1121 13:49:20.275129 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:49:20 crc kubenswrapper[4675]: I1121 13:49:20.387310 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:49:29 crc kubenswrapper[4675]: I1121 13:49:29.009721 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-czh7n" Nov 21 13:49:29 crc kubenswrapper[4675]: I1121 13:49:29.260434 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-tdn52" Nov 21 13:49:29 crc kubenswrapper[4675]: I1121 13:49:29.272930 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-65gdq" Nov 21 13:49:30 crc kubenswrapper[4675]: I1121 13:49:30.166406 4675 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 21 13:49:30 crc kubenswrapper[4675]: I1121 13:49:30.166467 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f7b8a2bc-e416-4521-9fa2-44dd6bd69400" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:49:40 crc kubenswrapper[4675]: I1121 13:49:40.167475 4675 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 21 13:49:40 crc kubenswrapper[4675]: I1121 13:49:40.168061 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f7b8a2bc-e416-4521-9fa2-44dd6bd69400" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:49:50 crc kubenswrapper[4675]: I1121 13:49:50.165881 4675 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: 
Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 21 13:49:50 crc kubenswrapper[4675]: I1121 13:49:50.166457 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f7b8a2bc-e416-4521-9fa2-44dd6bd69400" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:50:00 crc kubenswrapper[4675]: I1121 13:50:00.169743 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.556716 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5x2t8"] Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.565447 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.568471 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.568669 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-nwsk6" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.570009 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.570991 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.572385 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.584657 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.591838 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5x2t8"] Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.635178 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-5x2t8"] Nov 21 13:50:19 crc kubenswrapper[4675]: E1121 13:50:19.635699 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-wlnnt metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-5x2t8" podUID="be79dd44-fbdb-42fe-ba7c-35a151a27832" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651258 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-trusted-ca\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651307 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-entrypoint\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " 
pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651345 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-token\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651369 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be79dd44-fbdb-42fe-ba7c-35a151a27832-tmp\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlnnt\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-kube-api-access-wlnnt\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651677 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be79dd44-fbdb-42fe-ba7c-35a151a27832-datadir\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651755 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-syslog-receiver\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.651955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-sa-token\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.652001 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config-openshift-service-cacrt\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.652153 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc 
kubenswrapper[4675]: I1121 13:50:19.753893 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.753947 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-trusted-ca\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.753965 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-entrypoint\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.753992 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-token\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754012 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be79dd44-fbdb-42fe-ba7c-35a151a27832-tmp\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754044 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlnnt\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-kube-api-access-wlnnt\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754090 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be79dd44-fbdb-42fe-ba7c-35a151a27832-datadir\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754125 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-syslog-receiver\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754143 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754191 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-sa-token\") pod \"collector-5x2t8\" (UID: 
\"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754207 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config-openshift-service-cacrt\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754879 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754946 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config-openshift-service-cacrt\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754950 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be79dd44-fbdb-42fe-ba7c-35a151a27832-datadir\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.754985 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-entrypoint\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.755015 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-trusted-ca\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: E1121 13:50:19.755100 4675 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Nov 21 13:50:19 crc kubenswrapper[4675]: E1121 13:50:19.755241 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics podName:be79dd44-fbdb-42fe-ba7c-35a151a27832 nodeName:}" failed. No retries permitted until 2025-11-21 13:50:20.255224836 +0000 UTC m=+1096.981639563 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics") pod "collector-5x2t8" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832") : secret "collector-metrics" not found Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.760465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be79dd44-fbdb-42fe-ba7c-35a151a27832-tmp\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.760769 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-syslog-receiver\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.778543 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-token\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.779851 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-sa-token\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:19 crc kubenswrapper[4675]: I1121 13:50:19.780584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlnnt\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-kube-api-access-wlnnt\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.261684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.266579 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics\") pod \"collector-5x2t8\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " pod="openshift-logging/collector-5x2t8" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.521586 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5x2t8" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.529094 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5x2t8" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.666904 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-trusted-ca\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.666969 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-token\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667049 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667080 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-sa-token\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-syslog-receiver\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667134 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-entrypoint\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667157 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667197 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlnnt\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-kube-api-access-wlnnt\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667221 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be79dd44-fbdb-42fe-ba7c-35a151a27832-datadir\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config-openshift-service-cacrt\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: 
\"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667270 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be79dd44-fbdb-42fe-ba7c-35a151a27832-tmp\") pod \"be79dd44-fbdb-42fe-ba7c-35a151a27832\" (UID: \"be79dd44-fbdb-42fe-ba7c-35a151a27832\") " Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667381 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be79dd44-fbdb-42fe-ba7c-35a151a27832-datadir" (OuterVolumeSpecName: "datadir") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667609 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667743 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.667833 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.668198 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.668212 4675 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.668220 4675 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be79dd44-fbdb-42fe-ba7c-35a151a27832-datadir\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.668229 4675 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.668610 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config" (OuterVolumeSpecName: "config") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.670883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-token" (OuterVolumeSpecName: "collector-token") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.671397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be79dd44-fbdb-42fe-ba7c-35a151a27832-tmp" (OuterVolumeSpecName: "tmp") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.671832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-kube-api-access-wlnnt" (OuterVolumeSpecName: "kube-api-access-wlnnt") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "kube-api-access-wlnnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.672291 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-sa-token" (OuterVolumeSpecName: "sa-token") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.674031 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.674125 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics" (OuterVolumeSpecName: "metrics") pod "be79dd44-fbdb-42fe-ba7c-35a151a27832" (UID: "be79dd44-fbdb-42fe-ba7c-35a151a27832"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770095 4675 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770128 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlnnt\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-kube-api-access-wlnnt\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770141 4675 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be79dd44-fbdb-42fe-ba7c-35a151a27832-tmp\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770176 4675 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770188 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be79dd44-fbdb-42fe-ba7c-35a151a27832-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770195 4675 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be79dd44-fbdb-42fe-ba7c-35a151a27832-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:20 crc kubenswrapper[4675]: I1121 13:50:20.770204 4675 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be79dd44-fbdb-42fe-ba7c-35a151a27832-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.528785 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5x2t8" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.573973 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-5x2t8"] Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.580583 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-5x2t8"] Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.600498 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-4nf2w"] Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.602207 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.607396 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.607503 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.607792 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-nwsk6" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.607910 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.607353 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.617500 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.626316 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4nf2w"] Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbab1657-ebec-4d72-92b4-765a9fb4bd21-sa-token\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683727 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbab1657-ebec-4d72-92b4-765a9fb4bd21-tmp\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683754 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-config-openshift-service-cacrt\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683781 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-config\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683803 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbab1657-ebec-4d72-92b4-765a9fb4bd21-datadir\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683826 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-trusted-ca\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " 
pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683854 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-entrypoint\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683890 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-metrics\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683922 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-collector-token\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683945 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx6kr\" (UniqueName: \"kubernetes.io/projected/bbab1657-ebec-4d72-92b4-765a9fb4bd21-kube-api-access-mx6kr\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.683972 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-collector-syslog-receiver\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbab1657-ebec-4d72-92b4-765a9fb4bd21-tmp\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-config-openshift-service-cacrt\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785159 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-config\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785178 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbab1657-ebec-4d72-92b4-765a9fb4bd21-datadir\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785197 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-trusted-ca\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-entrypoint\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-metrics\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785268 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-collector-token\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785286 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx6kr\" (UniqueName: \"kubernetes.io/projected/bbab1657-ebec-4d72-92b4-765a9fb4bd21-kube-api-access-mx6kr\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-collector-syslog-receiver\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.785346 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbab1657-ebec-4d72-92b4-765a9fb4bd21-sa-token\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.786520 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-config\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.786556 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bbab1657-ebec-4d72-92b4-765a9fb4bd21-datadir\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.786520 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-entrypoint\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 
13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.787111 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-config-openshift-service-cacrt\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.787272 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbab1657-ebec-4d72-92b4-765a9fb4bd21-trusted-ca\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.792445 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bbab1657-ebec-4d72-92b4-765a9fb4bd21-tmp\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.794966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-metrics\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.798627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-collector-syslog-receiver\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.799047 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bbab1657-ebec-4d72-92b4-765a9fb4bd21-collector-token\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.804752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bbab1657-ebec-4d72-92b4-765a9fb4bd21-sa-token\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.810729 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx6kr\" (UniqueName: \"kubernetes.io/projected/bbab1657-ebec-4d72-92b4-765a9fb4bd21-kube-api-access-mx6kr\") pod \"collector-4nf2w\" (UID: \"bbab1657-ebec-4d72-92b4-765a9fb4bd21\") " pod="openshift-logging/collector-4nf2w" Nov 21 13:50:21 crc kubenswrapper[4675]: I1121 13:50:21.921689 4675 util.go:30] "No sandbox for pod can be found. 
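The run above is the kubelet's volume manager bringing one pod (collector-4nf2w) from desired state to actual state: each volume is first verified as attached (reconciler_common.go:245), a mount is then started (reconciler_common.go:218), and the plugin-specific SetUp is confirmed (operation_generator.go:637). Note how the configmap volumes succeed within about a millisecond while the secret and projected volumes take roughly 10–25 ms longer, since their payloads have to be fetched first. Below is a minimal sketch of the desired-vs-actual reconcile pattern, with invented types; it is not the kubelet's actual code.

```go
// Minimal sketch of a desired-state vs. actual-state volume reconcile loop,
// with invented types; NOT the kubelet's implementation.
package main

import "fmt"

type volume struct {
	Name   string // e.g. "entrypoint"
	Plugin string // e.g. "kubernetes.io/configmap"
}

// reconcile issues a mount for every desired volume missing from the actual
// state, mirroring the VerifyControllerAttachedVolume -> MountVolume ->
// "SetUp succeeded" progression logged above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.Name] {
			continue // already in actual state; nothing to do
		}
		fmt.Printf("MountVolume started for volume %q (%s)\n", v.Name, v.Plugin)
		// ...plugin-specific SetUp (write configmap keys, fetch secret, ...)...
		mounted[v.Name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.Name)
	}
}

func main() {
	desired := []volume{
		{"config", "kubernetes.io/configmap"},
		{"metrics", "kubernetes.io/secret"},
	}
	reconcile(desired, map[string]bool{})
}
```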
Nov 21 13:50:22 crc kubenswrapper[4675]: I1121 13:50:22.152831 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4nf2w"]
Nov 21 13:50:22 crc kubenswrapper[4675]: I1121 13:50:22.537095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4nf2w" event={"ID":"bbab1657-ebec-4d72-92b4-765a9fb4bd21","Type":"ContainerStarted","Data":"166fe341ddd5a65fce43a4ff91a0d79d35c21d8ffd70e60a0b07429ae7a06f4f"}
Nov 21 13:50:22 crc kubenswrapper[4675]: I1121 13:50:22.859517 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be79dd44-fbdb-42fe-ba7c-35a151a27832" path="/var/lib/kubelet/pods/be79dd44-fbdb-42fe-ba7c-35a151a27832/volumes"
Nov 21 13:50:31 crc kubenswrapper[4675]: I1121 13:50:31.609756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4nf2w" event={"ID":"bbab1657-ebec-4d72-92b4-765a9fb4bd21","Type":"ContainerStarted","Data":"b28bfeea81de847a7e65940ea6c1503ed3f2296d3c735c145e46987847607e76"}
Nov 21 13:50:31 crc kubenswrapper[4675]: I1121 13:50:31.636240 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-4nf2w" podStartSLOduration=2.181118889 podStartE2EDuration="10.636222733s" podCreationTimestamp="2025-11-21 13:50:21 +0000 UTC" firstStartedPulling="2025-11-21 13:50:22.173705232 +0000 UTC m=+1098.900119959" lastFinishedPulling="2025-11-21 13:50:30.628809076 +0000 UTC m=+1107.355223803" observedRunningTime="2025-11-21 13:50:31.628618501 +0000 UTC m=+1108.355033238" watchObservedRunningTime="2025-11-21 13:50:31.636222733 +0000 UTC m=+1108.362637450"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.195706 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"]
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.197826 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
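The "Observed pod startup duration" entry above is worth decoding: podStartE2EDuration (10.636 s from pod creation to observed running) minus the image-pull window (13:50:22.173 to 13:50:30.628, about 8.455 s) gives podStartSLOduration = 2.181 s, so almost all of the collector's startup time was spent pulling its image. A quick, purely illustrative check of that arithmetic:

```go
// Back-of-the-envelope check of the "Observed pod startup duration" entry
// above (illustrative only): the SLO duration excludes the image-pull window.
package main

import "fmt"

func main() {
	e2e := 10.636222733                 // podStartE2EDuration, seconds
	pull := 30.628809076 - 22.173705232 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("podStartSLOduration = %.9f s\n", e2e-pull) // 2.181118889, as logged
}
```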
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.199657 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.211163 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"]
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.344602 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.344671 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxvk\" (UniqueName: \"kubernetes.io/projected/54bee570-2720-4507-89bf-23f1095205a2-kube-api-access-hmxvk\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.344719 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.446239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.446304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxvk\" (UniqueName: \"kubernetes.io/projected/54bee570-2720-4507-89bf-23f1095205a2-kube-api-access-hmxvk\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.446339 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.446911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.447277 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.464039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxvk\" (UniqueName: \"kubernetes.io/projected/54bee570-2720-4507-89bf-23f1095205a2-kube-api-access-hmxvk\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.558536 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"
Nov 21 13:50:57 crc kubenswrapper[4675]: I1121 13:50:57.819657 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn"]
Nov 21 13:50:57 crc kubenswrapper[4675]: W1121 13:50:57.828233 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54bee570_2720_4507_89bf_23f1095205a2.slice/crio-04f3c613448747e7e882b772d22dda6e00635246c4207f941060436edc35df91 WatchSource:0}: Error finding container 04f3c613448747e7e882b772d22dda6e00635246c4207f941060436edc35df91: Status 404 returned error can't find the container with id 04f3c613448747e7e882b772d22dda6e00635246c4207f941060436edc35df91
Nov 21 13:50:58 crc kubenswrapper[4675]: I1121 13:50:58.814978 4675 generic.go:334] "Generic (PLEG): container finished" podID="54bee570-2720-4507-89bf-23f1095205a2" containerID="961dd2f2ac1c4e55099edeae51e3f52fa1b51db5a5191cd95a4b0d09ce806f3b" exitCode=0
Nov 21 13:50:58 crc kubenswrapper[4675]: I1121 13:50:58.815049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" event={"ID":"54bee570-2720-4507-89bf-23f1095205a2","Type":"ContainerDied","Data":"961dd2f2ac1c4e55099edeae51e3f52fa1b51db5a5191cd95a4b0d09ce806f3b"}
Nov 21 13:50:58 crc kubenswrapper[4675]: I1121 13:50:58.815317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" event={"ID":"54bee570-2720-4507-89bf-23f1095205a2","Type":"ContainerStarted","Data":"04f3c613448747e7e882b772d22dda6e00635246c4207f941060436edc35df91"}
Nov 21 13:51:00 crc kubenswrapper[4675]: I1121 13:51:00.830266 4675 generic.go:334] "Generic (PLEG): container finished" podID="54bee570-2720-4507-89bf-23f1095205a2" containerID="c5d57d14da184948795ce3a6eb29edc5d0a45188b5fecd8226fa6b821e5cb6a4" exitCode=0
Nov 21 13:51:00 crc kubenswrapper[4675]: I1121 13:51:00.830356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" event={"ID":"54bee570-2720-4507-89bf-23f1095205a2","Type":"ContainerDied","Data":"c5d57d14da184948795ce3a6eb29edc5d0a45188b5fecd8226fa6b821e5cb6a4"}
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" event={"ID":"54bee570-2720-4507-89bf-23f1095205a2","Type":"ContainerDied","Data":"c5d57d14da184948795ce3a6eb29edc5d0a45188b5fecd8226fa6b821e5cb6a4"} Nov 21 13:51:01 crc kubenswrapper[4675]: I1121 13:51:01.838423 4675 generic.go:334] "Generic (PLEG): container finished" podID="54bee570-2720-4507-89bf-23f1095205a2" containerID="5791e5cf86858651c57c006f8f236fa90116c1698a9ba8ac6ad6a9f8db5466bf" exitCode=0 Nov 21 13:51:01 crc kubenswrapper[4675]: I1121 13:51:01.838606 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" event={"ID":"54bee570-2720-4507-89bf-23f1095205a2","Type":"ContainerDied","Data":"5791e5cf86858651c57c006f8f236fa90116c1698a9ba8ac6ad6a9f8db5466bf"} Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.108808 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.148059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-bundle\") pod \"54bee570-2720-4507-89bf-23f1095205a2\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.148128 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-util\") pod \"54bee570-2720-4507-89bf-23f1095205a2\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.148201 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmxvk\" (UniqueName: \"kubernetes.io/projected/54bee570-2720-4507-89bf-23f1095205a2-kube-api-access-hmxvk\") pod \"54bee570-2720-4507-89bf-23f1095205a2\" (UID: \"54bee570-2720-4507-89bf-23f1095205a2\") " Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.148932 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-bundle" (OuterVolumeSpecName: "bundle") pod "54bee570-2720-4507-89bf-23f1095205a2" (UID: "54bee570-2720-4507-89bf-23f1095205a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.152714 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.158377 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bee570-2720-4507-89bf-23f1095205a2-kube-api-access-hmxvk" (OuterVolumeSpecName: "kube-api-access-hmxvk") pod "54bee570-2720-4507-89bf-23f1095205a2" (UID: "54bee570-2720-4507-89bf-23f1095205a2"). InnerVolumeSpecName "kube-api-access-hmxvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.165523 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-util" (OuterVolumeSpecName: "util") pod "54bee570-2720-4507-89bf-23f1095205a2" (UID: "54bee570-2720-4507-89bf-23f1095205a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.254637 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmxvk\" (UniqueName: \"kubernetes.io/projected/54bee570-2720-4507-89bf-23f1095205a2-kube-api-access-hmxvk\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.254678 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54bee570-2720-4507-89bf-23f1095205a2-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.854835 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" event={"ID":"54bee570-2720-4507-89bf-23f1095205a2","Type":"ContainerDied","Data":"04f3c613448747e7e882b772d22dda6e00635246c4207f941060436edc35df91"} Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.854884 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04f3c613448747e7e882b772d22dda6e00635246c4207f941060436edc35df91" Nov 21 13:51:03 crc kubenswrapper[4675]: I1121 13:51:03.854927 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn" Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.753477 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"] Nov 21 13:51:06 crc kubenswrapper[4675]: E1121 13:51:06.754305 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="util" Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.754318 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="util" Nov 21 13:51:06 crc kubenswrapper[4675]: E1121 13:51:06.754331 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="extract" Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.754337 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="extract" Nov 21 13:51:06 crc kubenswrapper[4675]: E1121 13:51:06.754362 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="pull" Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.754369 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="pull" Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.754496 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bee570-2720-4507-89bf-23f1095205a2" containerName="extract" Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.755017 4675 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.756642 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cr68x"
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.758645 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.760375 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.766551 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"]
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.821973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ss48\" (UniqueName: \"kubernetes.io/projected/83e5bca9-014a-43f4-8b6b-f4a4052ed662-kube-api-access-8ss48\") pod \"nmstate-operator-557fdffb88-2hlgb\" (UID: \"83e5bca9-014a-43f4-8b6b-f4a4052ed662\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.923053 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ss48\" (UniqueName: \"kubernetes.io/projected/83e5bca9-014a-43f4-8b6b-f4a4052ed662-kube-api-access-8ss48\") pod \"nmstate-operator-557fdffb88-2hlgb\" (UID: \"83e5bca9-014a-43f4-8b6b-f4a4052ed662\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"
Nov 21 13:51:06 crc kubenswrapper[4675]: I1121 13:51:06.941608 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ss48\" (UniqueName: \"kubernetes.io/projected/83e5bca9-014a-43f4-8b6b-f4a4052ed662-kube-api-access-8ss48\") pod \"nmstate-operator-557fdffb88-2hlgb\" (UID: \"83e5bca9-014a-43f4-8b6b-f4a4052ed662\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"
Nov 21 13:51:07 crc kubenswrapper[4675]: I1121 13:51:07.118708 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"
Nov 21 13:51:07 crc kubenswrapper[4675]: I1121 13:51:07.383349 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-2hlgb"]
Nov 21 13:51:07 crc kubenswrapper[4675]: I1121 13:51:07.879669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb" event={"ID":"83e5bca9-014a-43f4-8b6b-f4a4052ed662","Type":"ContainerStarted","Data":"4b05a14d91b84ef3c9f8dadaba1e30a7eb50a02be11dd302b4216846443056c4"}
Nov 21 13:51:09 crc kubenswrapper[4675]: I1121 13:51:09.896026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb" event={"ID":"83e5bca9-014a-43f4-8b6b-f4a4052ed662","Type":"ContainerStarted","Data":"7025eb948d1bf0494da8f832638a1ebbc4935a5d503051fa6864d3134bfb8b02"}
Nov 21 13:51:09 crc kubenswrapper[4675]: I1121 13:51:09.916210 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-2hlgb" podStartSLOduration=1.701741892 podStartE2EDuration="3.916185015s" podCreationTimestamp="2025-11-21 13:51:06 +0000 UTC" firstStartedPulling="2025-11-21 13:51:07.394595796 +0000 UTC m=+1144.121010523" lastFinishedPulling="2025-11-21 13:51:09.609038919 +0000 UTC m=+1146.335453646" observedRunningTime="2025-11-21 13:51:09.91128221 +0000 UTC m=+1146.637696937" watchObservedRunningTime="2025-11-21 13:51:09.916185015 +0000 UTC m=+1146.642599742"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.811617 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"]
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.813091 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.815382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7bnmq"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.835932 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"]
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.867865 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"]
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.869124 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.874366 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.893126 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"]
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.893938 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllgv\" (UniqueName: \"kubernetes.io/projected/f304a655-1aaa-43a3-81c1-32e5214c02cf-kube-api-access-hllgv\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.894002 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpq7\" (UniqueName: \"kubernetes.io/projected/32581066-5208-499f-8473-d7002fd31dca-kube-api-access-rqpq7\") pod \"nmstate-metrics-5dcf9c57c5-4m8jq\" (UID: \"32581066-5208-499f-8473-d7002fd31dca\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.894039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f304a655-1aaa-43a3-81c1-32e5214c02cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.916965 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qvplj"]
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.925007 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qvplj"
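Before the first pods of the openshift-nmstate namespace start, the kubelet logs "Caches populated" (reflector.go:368) for each secret and configmap the pods reference: it establishes dedicated watches per object rather than fetching them once. A conceptual analogue using client-go informers scoped to a namespace; this is an assumption for illustration, not the kubelet's own watch mechanism:

```go
// Conceptual analogue of per-namespace object watches using client-go
// informers (an illustration; not the kubelet's own mechanism).
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Watch only one namespace, as the per-object reflectors above do.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-nmstate"))
	secrets := factory.Core().V1().Secrets().Informer()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop) // analogous to "Caches populated" above
	_ = secrets.HasSynced()
}
```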
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997709 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-kube-api-access-7pc58\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-nmstate-lock\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997832 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllgv\" (UniqueName: \"kubernetes.io/projected/f304a655-1aaa-43a3-81c1-32e5214c02cf-kube-api-access-hllgv\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997873 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpq7\" (UniqueName: \"kubernetes.io/projected/32581066-5208-499f-8473-d7002fd31dca-kube-api-access-rqpq7\") pod \"nmstate-metrics-5dcf9c57c5-4m8jq\" (UID: \"32581066-5208-499f-8473-d7002fd31dca\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997925 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-ovs-socket\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f304a655-1aaa-43a3-81c1-32e5214c02cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:10 crc kubenswrapper[4675]: I1121 13:51:10.997994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-dbus-socket\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:10 crc kubenswrapper[4675]: E1121 13:51:10.998205 4675 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Nov 21 13:51:10 crc kubenswrapper[4675]: E1121 13:51:10.998275 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f304a655-1aaa-43a3-81c1-32e5214c02cf-tls-key-pair podName:f304a655-1aaa-43a3-81c1-32e5214c02cf nodeName:}" failed. No retries permitted until 2025-11-21 13:51:11.498257034 +0000 UTC m=+1148.224671761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f304a655-1aaa-43a3-81c1-32e5214c02cf-tls-key-pair") pod "nmstate-webhook-6b89b748d8-9ztsm" (UID: "f304a655-1aaa-43a3-81c1-32e5214c02cf") : secret "openshift-nmstate-webhook" not found
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.017664 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllgv\" (UniqueName: \"kubernetes.io/projected/f304a655-1aaa-43a3-81c1-32e5214c02cf-kube-api-access-hllgv\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.028525 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpq7\" (UniqueName: \"kubernetes.io/projected/32581066-5208-499f-8473-d7002fd31dca-kube-api-access-rqpq7\") pod \"nmstate-metrics-5dcf9c57c5-4m8jq\" (UID: \"32581066-5208-499f-8473-d7002fd31dca\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.091975 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"]
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.092830 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.095263 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.095677 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.096283 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fkwzd"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-ovs-socket\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-dbus-socket\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-kube-api-access-7pc58\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099569 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-nmstate-lock\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
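The two E-level entries above show the kubelet's per-volume retry policy: the tls-key-pair mount fails because the openshift-nmstate-webhook secret has not been created yet, so nestedpendingoperations schedules the next attempt 500 ms out (durationBeforeRetry); on repeated failures this delay grows, apparently exponentially up to a cap. Here a single retry suffices, because the operator creates the secret in the meantime (the "MountVolume.SetUp succeeded for volume \"tls-key-pair\"" entry at 13:51:11.512 further down). A generic retry-with-backoff sketch, with an invented helper rather than the kubelet's nestedpendingoperations implementation:

```go
// Generic retry-with-backoff sketch (invented helper; NOT the kubelet's
// nestedpendingoperations code): refuse to re-run an operation until its
// backoff window has elapsed, doubling the wait on each failure.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(op func() error, initial, max time.Duration) error {
	delay := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("failed: %v. No retries permitted until %s (durationBeforeRetry %s)\n",
			err, time.Now().Add(delay), delay)
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max // cap the exponential growth
		}
	}
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		if calls++; calls < 3 {
			return errors.New(`secret "openshift-nmstate-webhook" not found`)
		}
		return nil // the secret finally exists
	}, 500*time.Millisecond, 2*time.Minute)
	fmt.Println("done:", err)
}
```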
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-nmstate-lock\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099748 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-ovs-socket\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.099984 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-dbus-socket\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.121925 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/de0ab26e-1ac3-48eb-9647-f55c0249b9ec-kube-api-access-7pc58\") pod \"nmstate-handler-qvplj\" (UID: \"de0ab26e-1ac3-48eb-9647-f55c0249b9ec\") " pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.127727 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.150348 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"]
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.201294 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rng4c\" (UniqueName: \"kubernetes.io/projected/4ad50a02-1502-4a0b-8f49-32988242ec6b-kube-api-access-rng4c\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.201389 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad50a02-1502-4a0b-8f49-32988242ec6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.201570 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ad50a02-1502-4a0b-8f49-32988242ec6b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.251087 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.270111 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-84b94b4484-zz6mx"]
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.271459 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.304692 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-oauth-serving-cert\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-console-config\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-oauth-config\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305262 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw8zt\" (UniqueName: \"kubernetes.io/projected/97705629-fb36-433b-9788-38401a60643b-kube-api-access-hw8zt\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rng4c\" (UniqueName: \"kubernetes.io/projected/4ad50a02-1502-4a0b-8f49-32988242ec6b-kube-api-access-rng4c\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305409 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-serving-cert\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305501 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad50a02-1502-4a0b-8f49-32988242ec6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305581 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx"
(UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305651 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-service-ca\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.305717 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ad50a02-1502-4a0b-8f49-32988242ec6b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x" Nov 21 13:51:11 crc kubenswrapper[4675]: E1121 13:51:11.306281 4675 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 21 13:51:11 crc kubenswrapper[4675]: E1121 13:51:11.306357 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad50a02-1502-4a0b-8f49-32988242ec6b-plugin-serving-cert podName:4ad50a02-1502-4a0b-8f49-32988242ec6b nodeName:}" failed. No retries permitted until 2025-11-21 13:51:11.806337354 +0000 UTC m=+1148.532752081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/4ad50a02-1502-4a0b-8f49-32988242ec6b-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-crd2x" (UID: "4ad50a02-1502-4a0b-8f49-32988242ec6b") : secret "plugin-serving-cert" not found Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.306774 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ad50a02-1502-4a0b-8f49-32988242ec6b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.325199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rng4c\" (UniqueName: \"kubernetes.io/projected/4ad50a02-1502-4a0b-8f49-32988242ec6b-kube-api-access-rng4c\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.331287 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84b94b4484-zz6mx"] Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407025 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-console-config\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407104 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-oauth-config\") pod 
\"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407123 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw8zt\" (UniqueName: \"kubernetes.io/projected/97705629-fb36-433b-9788-38401a60643b-kube-api-access-hw8zt\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407153 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-serving-cert\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407226 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-service-ca\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.407256 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-oauth-serving-cert\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.408602 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-service-ca\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.408720 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-console-config\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.408787 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.409181 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-oauth-serving-cert\") pod \"console-84b94b4484-zz6mx\" (UID: 
\"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.411597 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-serving-cert\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.412445 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-oauth-config\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.432842 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw8zt\" (UniqueName: \"kubernetes.io/projected/97705629-fb36-433b-9788-38401a60643b-kube-api-access-hw8zt\") pod \"console-84b94b4484-zz6mx\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.508717 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f304a655-1aaa-43a3-81c1-32e5214c02cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.512633 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f304a655-1aaa-43a3-81c1-32e5214c02cf-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-9ztsm\" (UID: \"f304a655-1aaa-43a3-81c1-32e5214c02cf\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.612216 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.652761 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq"] Nov 21 13:51:11 crc kubenswrapper[4675]: W1121 13:51:11.672704 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32581066_5208_499f_8473_d7002fd31dca.slice/crio-c289d753dc5f651d3bbf26411dbc0ede2f630978a6ff5ead8b0e44ab8d67d490 WatchSource:0}: Error finding container c289d753dc5f651d3bbf26411dbc0ede2f630978a6ff5ead8b0e44ab8d67d490: Status 404 returned error can't find the container with id c289d753dc5f651d3bbf26411dbc0ede2f630978a6ff5ead8b0e44ab8d67d490 Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.805113 4675 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.813222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad50a02-1502-4a0b-8f49-32988242ec6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.819357 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad50a02-1502-4a0b-8f49-32988242ec6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-crd2x\" (UID: \"4ad50a02-1502-4a0b-8f49-32988242ec6b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.862290 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84b94b4484-zz6mx"]
Nov 21 13:51:11 crc kubenswrapper[4675]: W1121 13:51:11.883077 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97705629_fb36_433b_9788_38401a60643b.slice/crio-1e2b5c68ffd203250c502944f66eca84125c4ae701c40d1cfe92fa72b9f3fb86 WatchSource:0}: Error finding container 1e2b5c68ffd203250c502944f66eca84125c4ae701c40d1cfe92fa72b9f3fb86: Status 404 returned error can't find the container with id 1e2b5c68ffd203250c502944f66eca84125c4ae701c40d1cfe92fa72b9f3fb86
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.927780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84b94b4484-zz6mx" event={"ID":"97705629-fb36-433b-9788-38401a60643b","Type":"ContainerStarted","Data":"1e2b5c68ffd203250c502944f66eca84125c4ae701c40d1cfe92fa72b9f3fb86"}
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.929221 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq" event={"ID":"32581066-5208-499f-8473-d7002fd31dca","Type":"ContainerStarted","Data":"c289d753dc5f651d3bbf26411dbc0ede2f630978a6ff5ead8b0e44ab8d67d490"}
Nov 21 13:51:11 crc kubenswrapper[4675]: I1121 13:51:11.930012 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qvplj" event={"ID":"de0ab26e-1ac3-48eb-9647-f55c0249b9ec","Type":"ContainerStarted","Data":"acefe65714c630740d2c0246f9c52782354f6e2d71943d7160464cf94fba99e0"}
Nov 21 13:51:12 crc kubenswrapper[4675]: I1121 13:51:12.007812 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"
Nov 21 13:51:12 crc kubenswrapper[4675]: I1121 13:51:12.026508 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"]
Nov 21 13:51:12 crc kubenswrapper[4675]: I1121 13:51:12.465872 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x"]
Nov 21 13:51:12 crc kubenswrapper[4675]: W1121 13:51:12.472272 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad50a02_1502_4a0b_8f49_32988242ec6b.slice/crio-ff82d17c0852840c095b430d305fde5e3e9e29d5a2dc5dcb1405e7495121c127 WatchSource:0}: Error finding container ff82d17c0852840c095b430d305fde5e3e9e29d5a2dc5dcb1405e7495121c127: Status 404 returned error can't find the container with id ff82d17c0852840c095b430d305fde5e3e9e29d5a2dc5dcb1405e7495121c127
Nov 21 13:51:12 crc kubenswrapper[4675]: I1121 13:51:12.938497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x" event={"ID":"4ad50a02-1502-4a0b-8f49-32988242ec6b","Type":"ContainerStarted","Data":"ff82d17c0852840c095b430d305fde5e3e9e29d5a2dc5dcb1405e7495121c127"}
Nov 21 13:51:12 crc kubenswrapper[4675]: I1121 13:51:12.939995 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm" event={"ID":"f304a655-1aaa-43a3-81c1-32e5214c02cf","Type":"ContainerStarted","Data":"568b582e14e25318b049426a5a7a1a251ecd151ab7683adfd5145091e1081eeb"}
Nov 21 13:51:12 crc kubenswrapper[4675]: I1121 13:51:12.941395 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84b94b4484-zz6mx" event={"ID":"97705629-fb36-433b-9788-38401a60643b","Type":"ContainerStarted","Data":"c93d99a3f0108b969cd23a44c37fb14145d10fb4804f1382faf7e8ecf343b681"}
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.881275 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84b94b4484-zz6mx" podStartSLOduration=3.881258113 podStartE2EDuration="3.881258113s" podCreationTimestamp="2025-11-21 13:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:51:12.964032875 +0000 UTC m=+1149.690447592" watchObservedRunningTime="2025-11-21 13:51:14.881258113 +0000 UTC m=+1151.607672840"
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.963175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qvplj" event={"ID":"de0ab26e-1ac3-48eb-9647-f55c0249b9ec","Type":"ContainerStarted","Data":"a70a1625480071f1b67f085fba1b9f21f05c755d59f628dc9ee2eb691a464b12"}
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.963511 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qvplj"
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.969180 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq" event={"ID":"32581066-5208-499f-8473-d7002fd31dca","Type":"ContainerStarted","Data":"a8e0d9aa9657cc2b6a78ac0d3c006dbbb12848e37ccf978133c5d13e05834154"}
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.970942 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm" event={"ID":"f304a655-1aaa-43a3-81c1-32e5214c02cf","Type":"ContainerStarted","Data":"e0b3a978f38cad18ffdb8ba551ac22d2f04fc36aabb8e30ae3fd5c196bb1c50f"}
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.971209 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.988050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qvplj" podStartSLOduration=2.30621611 podStartE2EDuration="4.98802583s" podCreationTimestamp="2025-11-21 13:51:10 +0000 UTC" firstStartedPulling="2025-11-21 13:51:11.293510359 +0000 UTC m=+1148.019925086" lastFinishedPulling="2025-11-21 13:51:13.975320079 +0000 UTC m=+1150.701734806" observedRunningTime="2025-11-21 13:51:14.976522448 +0000 UTC m=+1151.702937175" watchObservedRunningTime="2025-11-21 13:51:14.98802583 +0000 UTC m=+1151.714440557"
Nov 21 13:51:14 crc kubenswrapper[4675]: I1121 13:51:14.996799 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm" podStartSLOduration=3.022716223 podStartE2EDuration="4.996781202s" podCreationTimestamp="2025-11-21 13:51:10 +0000 UTC" firstStartedPulling="2025-11-21 13:51:12.040379101 +0000 UTC m=+1148.766793818" lastFinishedPulling="2025-11-21 13:51:14.01444407 +0000 UTC m=+1150.740858797" observedRunningTime="2025-11-21 13:51:14.996219277 +0000 UTC m=+1151.722634004" watchObservedRunningTime="2025-11-21 13:51:14.996781202 +0000 UTC m=+1151.723195929"
Nov 21 13:51:16 crc kubenswrapper[4675]: I1121 13:51:16.991222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x" event={"ID":"4ad50a02-1502-4a0b-8f49-32988242ec6b","Type":"ContainerStarted","Data":"e78e62ee1a31172593efc14223b417d3108119df9045dc28ee52e0ed125410a1"}
Nov 21 13:51:17 crc kubenswrapper[4675]: I1121 13:51:17.011106 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-crd2x" podStartSLOduration=2.592653658 podStartE2EDuration="6.011088161s" podCreationTimestamp="2025-11-21 13:51:11 +0000 UTC" firstStartedPulling="2025-11-21 13:51:12.475585223 +0000 UTC m=+1149.201999950" lastFinishedPulling="2025-11-21 13:51:15.894019726 +0000 UTC m=+1152.620434453" observedRunningTime="2025-11-21 13:51:17.005460389 +0000 UTC m=+1153.731875116" watchObservedRunningTime="2025-11-21 13:51:17.011088161 +0000 UTC m=+1153.737502888"
Nov 21 13:51:19 crc kubenswrapper[4675]: I1121 13:51:19.008598 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq" event={"ID":"32581066-5208-499f-8473-d7002fd31dca","Type":"ContainerStarted","Data":"b6591cdebd569ba60bfcb224d9db159ec0ca14354519ac52a568f76140cb1a7b"}
Nov 21 13:51:19 crc kubenswrapper[4675]: I1121 13:51:19.036085 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4m8jq" podStartSLOduration=2.69007795 podStartE2EDuration="9.036045442s" podCreationTimestamp="2025-11-21 13:51:10 +0000 UTC" firstStartedPulling="2025-11-21 13:51:11.67504745 +0000 UTC m=+1148.401462177" lastFinishedPulling="2025-11-21 13:51:18.021014942 +0000 UTC m=+1154.747429669" observedRunningTime="2025-11-21 13:51:19.030899211 +0000 UTC m=+1155.757313958" watchObservedRunningTime="2025-11-21 13:51:19.036045442 +0000 UTC m=+1155.762460169"
13:51:21 crc kubenswrapper[4675]: I1121 13:51:21.613217 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:21 crc kubenswrapper[4675]: I1121 13:51:21.613318 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:21 crc kubenswrapper[4675]: I1121 13:51:21.619288 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:22 crc kubenswrapper[4675]: I1121 13:51:22.034780 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84b94b4484-zz6mx"
Nov 21 13:51:22 crc kubenswrapper[4675]: I1121 13:51:22.099468 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cf4fc8679-kzt87"]
Nov 21 13:51:31 crc kubenswrapper[4675]: I1121 13:51:31.810574 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-9ztsm"
Nov 21 13:51:46 crc kubenswrapper[4675]: I1121 13:51:46.136692 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 13:51:46 crc kubenswrapper[4675]: I1121 13:51:46.137165 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.145057 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5cf4fc8679-kzt87" podUID="d155dc03-6e0a-4668-85e4-26b01f5df8c8" containerName="console" containerID="cri-o://334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4" gracePeriod=15
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.557423 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cf4fc8679-kzt87_d155dc03-6e0a-4668-85e4-26b01f5df8c8/console/0.log"
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.557728 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cf4fc8679-kzt87"
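
The two machine-config-daemon entries above are kubelet's HTTP liveness probe failing with "connect: connection refused": nothing was accepting connections on 127.0.0.1:8798, so the probe fails before any HTTP status is even seen. A minimal Go sketch of what such an HTTP probe boils down to; the one-second timeout and the standalone-program shape are illustrative assumptions, not kubelet's actual prober code (kubelet does treat 2xx/3xx responses as success).

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe target copied from the log entries above; the timeout is assumed.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With nothing listening on the port this prints
		// "dial tcp 127.0.0.1:8798: connect: connection refused",
		// matching the probeResult="failure" output recorded above.
		fmt.Println("Liveness probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// 2xx and 3xx responses count as success; anything else fails the probe.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("Liveness probe succeeded:", resp.Status)
	} else {
		fmt.Println("Liveness probe failed with status:", resp.Status)
	}
}
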
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.712461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xxw\" (UniqueName: \"kubernetes.io/projected/d155dc03-6e0a-4668-85e4-26b01f5df8c8-kube-api-access-t5xxw\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") "
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.712529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-oauth-serving-cert\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") "
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.712578 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-service-ca\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") "
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.712659 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-trusted-ca-bundle\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") "
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.712710 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-serving-cert\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") "
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.712802 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-oauth-config\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") "
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.713425 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-service-ca" (OuterVolumeSpecName: "service-ca") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.713492 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.713559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-config\") pod \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\" (UID: \"d155dc03-6e0a-4668-85e4-26b01f5df8c8\") " Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.713579 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.713866 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-config" (OuterVolumeSpecName: "console-config") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.714298 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.714315 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.714329 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.714341 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d155dc03-6e0a-4668-85e4-26b01f5df8c8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.718761 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.721313 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d155dc03-6e0a-4668-85e4-26b01f5df8c8-kube-api-access-t5xxw" (OuterVolumeSpecName: "kube-api-access-t5xxw") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "kube-api-access-t5xxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.728301 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d155dc03-6e0a-4668-85e4-26b01f5df8c8" (UID: "d155dc03-6e0a-4668-85e4-26b01f5df8c8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.815745 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.815785 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5xxw\" (UniqueName: \"kubernetes.io/projected/d155dc03-6e0a-4668-85e4-26b01f5df8c8-kube-api-access-t5xxw\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:47 crc kubenswrapper[4675]: I1121 13:51:47.815795 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d155dc03-6e0a-4668-85e4-26b01f5df8c8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.224466 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cf4fc8679-kzt87_d155dc03-6e0a-4668-85e4-26b01f5df8c8/console/0.log" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.224837 4675 generic.go:334] "Generic (PLEG): container finished" podID="d155dc03-6e0a-4668-85e4-26b01f5df8c8" containerID="334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4" exitCode=2 Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.224870 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cf4fc8679-kzt87" event={"ID":"d155dc03-6e0a-4668-85e4-26b01f5df8c8","Type":"ContainerDied","Data":"334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4"} Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.224898 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cf4fc8679-kzt87" event={"ID":"d155dc03-6e0a-4668-85e4-26b01f5df8c8","Type":"ContainerDied","Data":"3fa089206e5e791627465bb44b5c4a886512c183b590e14f4f4adf9760ea9ec0"} Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.224917 4675 scope.go:117] "RemoveContainer" containerID="334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.225091 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cf4fc8679-kzt87" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.252540 4675 scope.go:117] "RemoveContainer" containerID="334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4" Nov 21 13:51:48 crc kubenswrapper[4675]: E1121 13:51:48.253027 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4\": container with ID starting with 334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4 not found: ID does not exist" containerID="334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.253093 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4"} err="failed to get container status \"334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4\": rpc error: code = NotFound desc = could not find container \"334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4\": container with ID starting with 334767b87ce464bbb0ca3ed6a63b5a056d417114de5c6f73521e06eca7a0cff4 not found: ID does not exist" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.259710 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cf4fc8679-kzt87"] Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.265236 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5cf4fc8679-kzt87"] Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.861680 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d155dc03-6e0a-4668-85e4-26b01f5df8c8" path="/var/lib/kubelet/pods/d155dc03-6e0a-4668-85e4-26b01f5df8c8/volumes" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.864032 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8"] Nov 21 13:51:48 crc kubenswrapper[4675]: E1121 13:51:48.867632 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d155dc03-6e0a-4668-85e4-26b01f5df8c8" containerName="console" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.867971 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d155dc03-6e0a-4668-85e4-26b01f5df8c8" containerName="console" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.869340 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d155dc03-6e0a-4668-85e4-26b01f5df8c8" containerName="console" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.870425 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.873200 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:51:48 crc kubenswrapper[4675]: I1121 13:51:48.874475 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8"] Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.043500 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.043805 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctmj\" (UniqueName: \"kubernetes.io/projected/f5500da3-4dcd-4802-86a8-f473843eebe4-kube-api-access-9ctmj\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.043951 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.145992 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.146104 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctmj\" (UniqueName: \"kubernetes.io/projected/f5500da3-4dcd-4802-86a8-f473843eebe4-kube-api-access-9ctmj\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.146162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.146614 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.146626 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.173666 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctmj\" (UniqueName: \"kubernetes.io/projected/f5500da3-4dcd-4802-86a8-f473843eebe4-kube-api-access-9ctmj\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.187887 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:49 crc kubenswrapper[4675]: I1121 13:51:49.471458 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8"] Nov 21 13:51:50 crc kubenswrapper[4675]: I1121 13:51:50.244538 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerID="c6dd342f9d1f78f143eff270c8292fc914ecf62ec57ec10fe1c8e98913697927" exitCode=0 Nov 21 13:51:50 crc kubenswrapper[4675]: I1121 13:51:50.244599 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" event={"ID":"f5500da3-4dcd-4802-86a8-f473843eebe4","Type":"ContainerDied","Data":"c6dd342f9d1f78f143eff270c8292fc914ecf62ec57ec10fe1c8e98913697927"} Nov 21 13:51:50 crc kubenswrapper[4675]: I1121 13:51:50.244882 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" event={"ID":"f5500da3-4dcd-4802-86a8-f473843eebe4","Type":"ContainerStarted","Data":"3b0082a421c64e5ec83c1ddba3bf19250d36452a84130faf1592afd4a2b9c47e"} Nov 21 13:51:50 crc kubenswrapper[4675]: I1121 13:51:50.246697 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:51:52 crc kubenswrapper[4675]: I1121 13:51:52.257955 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerID="86a02e8ee5e22c2f2cfa2614fe2e9175cff74c82129c11452694062ac297f845" exitCode=0 Nov 21 13:51:52 crc kubenswrapper[4675]: I1121 13:51:52.258194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" event={"ID":"f5500da3-4dcd-4802-86a8-f473843eebe4","Type":"ContainerDied","Data":"86a02e8ee5e22c2f2cfa2614fe2e9175cff74c82129c11452694062ac297f845"} Nov 21 13:51:53 crc kubenswrapper[4675]: I1121 13:51:53.270652 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerID="0ff04939467f67d03fd1bfc5fbdc4136597fae836ef6f1b8af3f42ba1e33ec8e" exitCode=0 Nov 21 13:51:53 crc kubenswrapper[4675]: I1121 13:51:53.270714 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" event={"ID":"f5500da3-4dcd-4802-86a8-f473843eebe4","Type":"ContainerDied","Data":"0ff04939467f67d03fd1bfc5fbdc4136597fae836ef6f1b8af3f42ba1e33ec8e"} Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.565603 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.738098 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ctmj\" (UniqueName: \"kubernetes.io/projected/f5500da3-4dcd-4802-86a8-f473843eebe4-kube-api-access-9ctmj\") pod \"f5500da3-4dcd-4802-86a8-f473843eebe4\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.738345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-util\") pod \"f5500da3-4dcd-4802-86a8-f473843eebe4\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.738424 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-bundle\") pod \"f5500da3-4dcd-4802-86a8-f473843eebe4\" (UID: \"f5500da3-4dcd-4802-86a8-f473843eebe4\") " Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.739763 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-bundle" (OuterVolumeSpecName: "bundle") pod "f5500da3-4dcd-4802-86a8-f473843eebe4" (UID: "f5500da3-4dcd-4802-86a8-f473843eebe4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.748411 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5500da3-4dcd-4802-86a8-f473843eebe4-kube-api-access-9ctmj" (OuterVolumeSpecName: "kube-api-access-9ctmj") pod "f5500da3-4dcd-4802-86a8-f473843eebe4" (UID: "f5500da3-4dcd-4802-86a8-f473843eebe4"). InnerVolumeSpecName "kube-api-access-9ctmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.840437 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:54 crc kubenswrapper[4675]: I1121 13:51:54.840474 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ctmj\" (UniqueName: \"kubernetes.io/projected/f5500da3-4dcd-4802-86a8-f473843eebe4-kube-api-access-9ctmj\") on node \"crc\" DevicePath \"\"" Nov 21 13:51:55 crc kubenswrapper[4675]: I1121 13:51:55.288389 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" event={"ID":"f5500da3-4dcd-4802-86a8-f473843eebe4","Type":"ContainerDied","Data":"3b0082a421c64e5ec83c1ddba3bf19250d36452a84130faf1592afd4a2b9c47e"} Nov 21 13:51:55 crc kubenswrapper[4675]: I1121 13:51:55.288434 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b0082a421c64e5ec83c1ddba3bf19250d36452a84130faf1592afd4a2b9c47e" Nov 21 13:51:55 crc kubenswrapper[4675]: I1121 13:51:55.288524 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8" Nov 21 13:51:55 crc kubenswrapper[4675]: I1121 13:51:55.344832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-util" (OuterVolumeSpecName: "util") pod "f5500da3-4dcd-4802-86a8-f473843eebe4" (UID: "f5500da3-4dcd-4802-86a8-f473843eebe4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:51:55 crc kubenswrapper[4675]: I1121 13:51:55.347507 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5500da3-4dcd-4802-86a8-f473843eebe4-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.789717 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x"] Nov 21 13:52:03 crc kubenswrapper[4675]: E1121 13:52:03.790612 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerName="extract" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.790629 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerName="extract" Nov 21 13:52:03 crc kubenswrapper[4675]: E1121 13:52:03.790644 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerName="pull" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.790651 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerName="pull" Nov 21 13:52:03 crc kubenswrapper[4675]: E1121 13:52:03.790674 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerName="util" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.790682 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" containerName="util" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.790850 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5500da3-4dcd-4802-86a8-f473843eebe4" 
containerName="extract" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.791583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.794106 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.794695 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.795078 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.795085 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-plwk4" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.795252 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.806743 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x"] Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.874364 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456ql\" (UniqueName: \"kubernetes.io/projected/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-kube-api-access-456ql\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.874441 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-webhook-cert\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.874466 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-apiservice-cert\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.975578 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456ql\" (UniqueName: \"kubernetes.io/projected/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-kube-api-access-456ql\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.975950 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-webhook-cert\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " 
pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.975976 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-apiservice-cert\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.983924 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-webhook-cert\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:03 crc kubenswrapper[4675]: I1121 13:52:03.994891 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456ql\" (UniqueName: \"kubernetes.io/projected/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-kube-api-access-456ql\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:03.999227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd-apiservice-cert\") pod \"metallb-operator-controller-manager-b6c7d7f4-57m9x\" (UID: \"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd\") " pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.111450 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.129242 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq"] Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.130166 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.132220 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.132535 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.132706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7xkj5" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.159982 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq"] Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.281011 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21f16da1-dc0f-421b-b6f2-13c658268ae7-webhook-cert\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.281355 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21f16da1-dc0f-421b-b6f2-13c658268ae7-apiservice-cert\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.281438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q44d\" (UniqueName: \"kubernetes.io/projected/21f16da1-dc0f-421b-b6f2-13c658268ae7-kube-api-access-5q44d\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.383124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q44d\" (UniqueName: \"kubernetes.io/projected/21f16da1-dc0f-421b-b6f2-13c658268ae7-kube-api-access-5q44d\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.383258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21f16da1-dc0f-421b-b6f2-13c658268ae7-webhook-cert\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.383297 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21f16da1-dc0f-421b-b6f2-13c658268ae7-apiservice-cert\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 
13:52:04.390778 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21f16da1-dc0f-421b-b6f2-13c658268ae7-webhook-cert\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.392376 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21f16da1-dc0f-421b-b6f2-13c658268ae7-apiservice-cert\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.426264 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q44d\" (UniqueName: \"kubernetes.io/projected/21f16da1-dc0f-421b-b6f2-13c658268ae7-kube-api-access-5q44d\") pod \"metallb-operator-webhook-server-6bbc7fcc74-d58sq\" (UID: \"21f16da1-dc0f-421b-b6f2-13c658268ae7\") " pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.432377 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x"] Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.517605 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:04 crc kubenswrapper[4675]: I1121 13:52:04.945205 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq"] Nov 21 13:52:04 crc kubenswrapper[4675]: W1121 13:52:04.947613 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21f16da1_dc0f_421b_b6f2_13c658268ae7.slice/crio-a15090af0f89a12a37c4697c1e3770f814fe90dd486142f5d34466b815348b27 WatchSource:0}: Error finding container a15090af0f89a12a37c4697c1e3770f814fe90dd486142f5d34466b815348b27: Status 404 returned error can't find the container with id a15090af0f89a12a37c4697c1e3770f814fe90dd486142f5d34466b815348b27 Nov 21 13:52:05 crc kubenswrapper[4675]: I1121 13:52:05.363946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" event={"ID":"21f16da1-dc0f-421b-b6f2-13c658268ae7","Type":"ContainerStarted","Data":"a15090af0f89a12a37c4697c1e3770f814fe90dd486142f5d34466b815348b27"} Nov 21 13:52:05 crc kubenswrapper[4675]: I1121 13:52:05.365011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" event={"ID":"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd","Type":"ContainerStarted","Data":"05ade84ec48ac67b476f7c51cd4bc925ec586fac36d791976bc28684c0b0182f"} Nov 21 13:52:12 crc kubenswrapper[4675]: I1121 13:52:12.421237 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" event={"ID":"21f16da1-dc0f-421b-b6f2-13c658268ae7","Type":"ContainerStarted","Data":"c77eb58839a063445a793c909b53ecd54d5c831930c1144593fc46b542d46d8d"} Nov 21 13:52:12 crc kubenswrapper[4675]: I1121 13:52:12.421825 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:12 crc kubenswrapper[4675]: I1121 13:52:12.423950 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" event={"ID":"c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd","Type":"ContainerStarted","Data":"7088f221774443dc373ca6e9701491c693fd0f7c3685a2ec3f07bade48deac6e"} Nov 21 13:52:12 crc kubenswrapper[4675]: I1121 13:52:12.424244 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:12 crc kubenswrapper[4675]: I1121 13:52:12.443232 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" podStartSLOduration=1.5991841789999999 podStartE2EDuration="8.443210827s" podCreationTimestamp="2025-11-21 13:52:04 +0000 UTC" firstStartedPulling="2025-11-21 13:52:04.950668945 +0000 UTC m=+1201.677083672" lastFinishedPulling="2025-11-21 13:52:11.794695583 +0000 UTC m=+1208.521110320" observedRunningTime="2025-11-21 13:52:12.440327567 +0000 UTC m=+1209.166742294" watchObservedRunningTime="2025-11-21 13:52:12.443210827 +0000 UTC m=+1209.169625554" Nov 21 13:52:12 crc kubenswrapper[4675]: I1121 13:52:12.467959 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" podStartSLOduration=2.140198574 podStartE2EDuration="9.467933556s" podCreationTimestamp="2025-11-21 13:52:03 +0000 UTC" firstStartedPulling="2025-11-21 13:52:04.441501085 +0000 UTC m=+1201.167915822" lastFinishedPulling="2025-11-21 13:52:11.769236077 +0000 UTC m=+1208.495650804" observedRunningTime="2025-11-21 13:52:12.462357571 +0000 UTC m=+1209.188772308" watchObservedRunningTime="2025-11-21 13:52:12.467933556 +0000 UTC m=+1209.194348293" Nov 21 13:52:16 crc kubenswrapper[4675]: I1121 13:52:16.136653 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:52:16 crc kubenswrapper[4675]: I1121 13:52:16.137036 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:52:24 crc kubenswrapper[4675]: I1121 13:52:24.526264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6bbc7fcc74-d58sq" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.115731 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b6c7d7f4-57m9x" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.840238 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-896kh"] Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.844008 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.846352 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.846640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zn78t" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.853952 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.858656 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc"] Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.860898 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.867501 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.875731 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc"] Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879001 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-metrics-certs\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879060 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298ph\" (UniqueName: \"kubernetes.io/projected/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-kube-api-access-298ph\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879158 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-conf\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879242 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5md\" (UniqueName: \"kubernetes.io/projected/046b1803-3201-4c23-bb9d-2cca261bdda0-kube-api-access-7j5md\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-startup\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879331 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-sockets\") pod 
\"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879351 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-metrics\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879395 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-reloader\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.879425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/046b1803-3201-4c23-bb9d-2cca261bdda0-cert\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.953405 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-plc79"] Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.954872 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-plc79" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.957126 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.957194 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.957339 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8ddlp" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.957937 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980488 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298ph\" (UniqueName: \"kubernetes.io/projected/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-kube-api-access-298ph\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980553 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-conf\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980627 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phxm\" (UniqueName: \"kubernetes.io/projected/22f88730-5c3f-4c5d-a223-be8170e96588-kube-api-access-4phxm\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7j5md\" (UniqueName: \"kubernetes.io/projected/046b1803-3201-4c23-bb9d-2cca261bdda0-kube-api-access-7j5md\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980703 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-metrics-certs\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980752 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/22f88730-5c3f-4c5d-a223-be8170e96588-metallb-excludel2\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980772 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-startup\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980797 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-sockets\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980820 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-metrics\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.980968 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-reloader\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981014 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/046b1803-3201-4c23-bb9d-2cca261bdda0-cert\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981096 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981116 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-conf\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " 
pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-metrics-certs\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: E1121 13:52:44.981209 4675 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981221 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-metrics\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: E1121 13:52:44.981261 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/046b1803-3201-4c23-bb9d-2cca261bdda0-cert podName:046b1803-3201-4c23-bb9d-2cca261bdda0 nodeName:}" failed. No retries permitted until 2025-11-21 13:52:45.481243489 +0000 UTC m=+1242.207658216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/046b1803-3201-4c23-bb9d-2cca261bdda0-cert") pod "frr-k8s-webhook-server-6998585d5-p7bcc" (UID: "046b1803-3201-4c23-bb9d-2cca261bdda0") : secret "frr-k8s-webhook-server-cert" not found Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981388 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-sockets\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981468 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-reloader\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.981987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-frr-startup\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:44 crc kubenswrapper[4675]: I1121 13:52:44.989614 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-metrics-certs\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.011976 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298ph\" (UniqueName: \"kubernetes.io/projected/afecd2d7-f280-48fd-b79e-eec3a7ee36f1-kube-api-access-298ph\") pod \"frr-k8s-896kh\" (UID: \"afecd2d7-f280-48fd-b79e-eec3a7ee36f1\") " pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.013823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j5md\" (UniqueName: 
\"kubernetes.io/projected/046b1803-3201-4c23-bb9d-2cca261bdda0-kube-api-access-7j5md\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.065637 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-krt5f"] Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.069399 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.074364 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081573 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4phxm\" (UniqueName: \"kubernetes.io/projected/22f88730-5c3f-4c5d-a223-be8170e96588-kube-api-access-4phxm\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-metrics-certs\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b076ee09-1376-4f8e-a15f-0b42e2b163d2-cert\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081658 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b076ee09-1376-4f8e-a15f-0b42e2b163d2-metrics-certs\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081679 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/22f88730-5c3f-4c5d-a223-be8170e96588-metallb-excludel2\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081695 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8nms\" (UniqueName: \"kubernetes.io/projected/b076ee09-1376-4f8e-a15f-0b42e2b163d2-kube-api-access-c8nms\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.081761 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: E1121 13:52:45.081892 4675 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 13:52:45 crc kubenswrapper[4675]: E1121 13:52:45.081938 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist podName:22f88730-5c3f-4c5d-a223-be8170e96588 nodeName:}" failed. No retries permitted until 2025-11-21 13:52:45.581923467 +0000 UTC m=+1242.308338194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist") pod "speaker-plc79" (UID: "22f88730-5c3f-4c5d-a223-be8170e96588") : secret "metallb-memberlist" not found Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.082762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/22f88730-5c3f-4c5d-a223-be8170e96588-metallb-excludel2\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.085101 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-krt5f"] Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.086227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-metrics-certs\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.108544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4phxm\" (UniqueName: \"kubernetes.io/projected/22f88730-5c3f-4c5d-a223-be8170e96588-kube-api-access-4phxm\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.169921 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.184254 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b076ee09-1376-4f8e-a15f-0b42e2b163d2-cert\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.184303 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b076ee09-1376-4f8e-a15f-0b42e2b163d2-metrics-certs\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.184329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8nms\" (UniqueName: \"kubernetes.io/projected/b076ee09-1376-4f8e-a15f-0b42e2b163d2-kube-api-access-c8nms\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.187333 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.189212 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b076ee09-1376-4f8e-a15f-0b42e2b163d2-metrics-certs\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.197371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b076ee09-1376-4f8e-a15f-0b42e2b163d2-cert\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.205778 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8nms\" (UniqueName: \"kubernetes.io/projected/b076ee09-1376-4f8e-a15f-0b42e2b163d2-kube-api-access-c8nms\") pod \"controller-6c7b4b5f48-krt5f\" (UID: \"b076ee09-1376-4f8e-a15f-0b42e2b163d2\") " pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.385496 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.488024 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/046b1803-3201-4c23-bb9d-2cca261bdda0-cert\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.491616 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/046b1803-3201-4c23-bb9d-2cca261bdda0-cert\") pod \"frr-k8s-webhook-server-6998585d5-p7bcc\" (UID: \"046b1803-3201-4c23-bb9d-2cca261bdda0\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.589663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:45 crc kubenswrapper[4675]: E1121 13:52:45.589964 4675 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 13:52:45 crc kubenswrapper[4675]: E1121 13:52:45.590027 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist podName:22f88730-5c3f-4c5d-a223-be8170e96588 nodeName:}" failed. No retries permitted until 2025-11-21 13:52:46.59000872 +0000 UTC m=+1243.316423447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist") pod "speaker-plc79" (UID: "22f88730-5c3f-4c5d-a223-be8170e96588") : secret "metallb-memberlist" not found Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.660182 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"71366b0b777dba23e0395feb9fe146f26e57881c449c850282b0edbe6a796631"} Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.782084 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:45 crc kubenswrapper[4675]: I1121 13:52:45.850700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-krt5f"] Nov 21 13:52:45 crc kubenswrapper[4675]: W1121 13:52:45.866625 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb076ee09_1376_4f8e_a15f_0b42e2b163d2.slice/crio-7f42d1d7684689f42ed105593c1754ccc07c83c43bf319aba3c6b592a76bf758 WatchSource:0}: Error finding container 7f42d1d7684689f42ed105593c1754ccc07c83c43bf319aba3c6b592a76bf758: Status 404 returned error can't find the container with id 7f42d1d7684689f42ed105593c1754ccc07c83c43bf319aba3c6b592a76bf758 Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.136372 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.136428 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.136476 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.137206 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd8ece146e7469ff47abc44df983434b24140bc8b8a19319d303006a9e5badd2"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.137271 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://fd8ece146e7469ff47abc44df983434b24140bc8b8a19319d303006a9e5badd2" gracePeriod=600 Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.187989 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc"] Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.606362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.612559 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/22f88730-5c3f-4c5d-a223-be8170e96588-memberlist\") pod \"speaker-plc79\" (UID: \"22f88730-5c3f-4c5d-a223-be8170e96588\") " pod="metallb-system/speaker-plc79" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.696330 4675 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/controller-6c7b4b5f48-krt5f" event={"ID":"b076ee09-1376-4f8e-a15f-0b42e2b163d2","Type":"ContainerStarted","Data":"c3d650a6e237016b2c29421b7216d6792541e46468fc90c5fef098ce11cd04f4"} Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.696671 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-krt5f" event={"ID":"b076ee09-1376-4f8e-a15f-0b42e2b163d2","Type":"ContainerStarted","Data":"f10f1491d5c90e14d360537cdacf3dca1f92254af61f69feb3f7d5c64744102f"} Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.696682 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-krt5f" event={"ID":"b076ee09-1376-4f8e-a15f-0b42e2b163d2","Type":"ContainerStarted","Data":"7f42d1d7684689f42ed105593c1754ccc07c83c43bf319aba3c6b592a76bf758"} Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.697901 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.709740 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" event={"ID":"046b1803-3201-4c23-bb9d-2cca261bdda0","Type":"ContainerStarted","Data":"0fbce33582f97de63894958dec39c1fd342b15b53bd2b091e52996792fd2ada3"} Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.733816 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-krt5f" podStartSLOduration=1.733790366 podStartE2EDuration="1.733790366s" podCreationTimestamp="2025-11-21 13:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:52:46.730472766 +0000 UTC m=+1243.456887493" watchObservedRunningTime="2025-11-21 13:52:46.733790366 +0000 UTC m=+1243.460205093" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.746886 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="fd8ece146e7469ff47abc44df983434b24140bc8b8a19319d303006a9e5badd2" exitCode=0 Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.746966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"fd8ece146e7469ff47abc44df983434b24140bc8b8a19319d303006a9e5badd2"} Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.747010 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"4f2f7ddee4baba66416eb7233c361ee3ddc2444a945155131226bb7f36fc9024"} Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.747032 4675 scope.go:117] "RemoveContainer" containerID="e4478a9785c2c0cd8603759bbdd163dd836f7c97363478e7200b2c21e3d3682a" Nov 21 13:52:46 crc kubenswrapper[4675]: I1121 13:52:46.772823 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-plc79" Nov 21 13:52:47 crc kubenswrapper[4675]: I1121 13:52:47.762587 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-plc79" event={"ID":"22f88730-5c3f-4c5d-a223-be8170e96588","Type":"ContainerStarted","Data":"7b0da81595171ac36f4a330b983a75835027352b07a4b0110d2d31ae35d7c876"} Nov 21 13:52:47 crc kubenswrapper[4675]: I1121 13:52:47.763302 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-plc79" event={"ID":"22f88730-5c3f-4c5d-a223-be8170e96588","Type":"ContainerStarted","Data":"fc6df24c13f79eed0d4df008917f73de0efde6a0fe72be996f46e36c9284e4d9"} Nov 21 13:52:47 crc kubenswrapper[4675]: I1121 13:52:47.763319 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-plc79" event={"ID":"22f88730-5c3f-4c5d-a223-be8170e96588","Type":"ContainerStarted","Data":"dae4a9300bf34c452dcc034930e1d79bfd3fa2dfbe16eb34ef60a12ddd4121c6"} Nov 21 13:52:47 crc kubenswrapper[4675]: I1121 13:52:47.764277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-plc79" Nov 21 13:52:54 crc kubenswrapper[4675]: I1121 13:52:54.827157 4675 generic.go:334] "Generic (PLEG): container finished" podID="afecd2d7-f280-48fd-b79e-eec3a7ee36f1" containerID="59c168cd38ac37d3b116c7dc2b4527d6c0ec1120610674829ce2867f7fc4baac" exitCode=0 Nov 21 13:52:54 crc kubenswrapper[4675]: I1121 13:52:54.827254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerDied","Data":"59c168cd38ac37d3b116c7dc2b4527d6c0ec1120610674829ce2867f7fc4baac"} Nov 21 13:52:54 crc kubenswrapper[4675]: I1121 13:52:54.832350 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" event={"ID":"046b1803-3201-4c23-bb9d-2cca261bdda0","Type":"ContainerStarted","Data":"e7fcbfc66b67ed9201989e4df8f569c0d4c85c15623f53e9034b5362884c0557"} Nov 21 13:52:54 crc kubenswrapper[4675]: I1121 13:52:54.833288 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:52:54 crc kubenswrapper[4675]: I1121 13:52:54.870892 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-plc79" podStartSLOduration=10.870865296 podStartE2EDuration="10.870865296s" podCreationTimestamp="2025-11-21 13:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:52:47.785684348 +0000 UTC m=+1244.512099105" watchObservedRunningTime="2025-11-21 13:52:54.870865296 +0000 UTC m=+1251.597280063" Nov 21 13:52:54 crc kubenswrapper[4675]: I1121 13:52:54.897394 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" podStartSLOduration=3.136548099 podStartE2EDuration="10.897372148s" podCreationTimestamp="2025-11-21 13:52:44 +0000 UTC" firstStartedPulling="2025-11-21 13:52:46.198548455 +0000 UTC m=+1242.924963182" lastFinishedPulling="2025-11-21 13:52:53.959372504 +0000 UTC m=+1250.685787231" observedRunningTime="2025-11-21 13:52:54.892996202 +0000 UTC m=+1251.619410939" watchObservedRunningTime="2025-11-21 13:52:54.897372148 +0000 UTC m=+1251.623786885" Nov 21 13:52:55 crc kubenswrapper[4675]: I1121 13:52:55.846724 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="afecd2d7-f280-48fd-b79e-eec3a7ee36f1" containerID="3e2aedf9f5d1eb16cbb7b96cb099da46a838fd770cf823fe888dd08050667116" exitCode=0 Nov 21 13:52:55 crc kubenswrapper[4675]: I1121 13:52:55.846846 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerDied","Data":"3e2aedf9f5d1eb16cbb7b96cb099da46a838fd770cf823fe888dd08050667116"} Nov 21 13:52:56 crc kubenswrapper[4675]: I1121 13:52:56.860559 4675 generic.go:334] "Generic (PLEG): container finished" podID="afecd2d7-f280-48fd-b79e-eec3a7ee36f1" containerID="30396c41e192eb2bc6ee0c27ea183e08a5dbdd7681344093cfaf1a2bdfdde44b" exitCode=0 Nov 21 13:52:56 crc kubenswrapper[4675]: I1121 13:52:56.860642 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerDied","Data":"30396c41e192eb2bc6ee0c27ea183e08a5dbdd7681344093cfaf1a2bdfdde44b"} Nov 21 13:52:57 crc kubenswrapper[4675]: I1121 13:52:57.872584 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"788f555f94f19162fc2d6cb7abcfe723a84b181cdfca70362a196948fc68746f"} Nov 21 13:52:57 crc kubenswrapper[4675]: I1121 13:52:57.872954 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"4842bc69cd8710b853128864bbf13aff59ce3943bc37ca7b296edb8d4e7861fa"} Nov 21 13:52:57 crc kubenswrapper[4675]: I1121 13:52:57.872970 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"bc9e08844b95419e53322e6fa33aa9c2696a8975e98cc3d97f4785b702e977d7"} Nov 21 13:52:57 crc kubenswrapper[4675]: I1121 13:52:57.872982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"de93ce5c8fd6b2c6eb8696cd99accc9484c3058eb19ad50f41485775d8b32aeb"} Nov 21 13:52:57 crc kubenswrapper[4675]: I1121 13:52:57.872993 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"d7de1b221372e7b2734030c678988a83baec5f9d66de3042c7fc8c2b74b32322"} Nov 21 13:52:58 crc kubenswrapper[4675]: I1121 13:52:58.883481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-896kh" event={"ID":"afecd2d7-f280-48fd-b79e-eec3a7ee36f1","Type":"ContainerStarted","Data":"22388201c590709e206ed169b291e709a42b2f692640b32f2dfe68a955040d4d"} Nov 21 13:52:58 crc kubenswrapper[4675]: I1121 13:52:58.883886 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-896kh" Nov 21 13:52:58 crc kubenswrapper[4675]: I1121 13:52:58.909726 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-896kh" podStartSLOduration=6.593352157 podStartE2EDuration="14.909710637s" podCreationTimestamp="2025-11-21 13:52:44 +0000 UTC" firstStartedPulling="2025-11-21 13:52:45.617878855 +0000 UTC m=+1242.344293582" lastFinishedPulling="2025-11-21 13:52:53.934237335 +0000 UTC m=+1250.660652062" observedRunningTime="2025-11-21 13:52:58.904241165 +0000 UTC m=+1255.630655912" 
watchObservedRunningTime="2025-11-21 13:52:58.909710637 +0000 UTC m=+1255.636125364" Nov 21 13:53:00 crc kubenswrapper[4675]: I1121 13:53:00.171227 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-896kh" Nov 21 13:53:00 crc kubenswrapper[4675]: I1121 13:53:00.208505 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-896kh" Nov 21 13:53:05 crc kubenswrapper[4675]: I1121 13:53:05.391380 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-krt5f" Nov 21 13:53:05 crc kubenswrapper[4675]: I1121 13:53:05.789451 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-p7bcc" Nov 21 13:53:06 crc kubenswrapper[4675]: I1121 13:53:06.778552 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-plc79" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.646347 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cnlw4"] Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.647566 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnlw4" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.649434 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2zjh4" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.650141 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.651057 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.655309 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnlw4"] Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.798621 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvrq\" (UniqueName: \"kubernetes.io/projected/dec16a65-ad25-4b92-8981-f5bcb74610af-kube-api-access-5lvrq\") pod \"openstack-operator-index-cnlw4\" (UID: \"dec16a65-ad25-4b92-8981-f5bcb74610af\") " pod="openstack-operators/openstack-operator-index-cnlw4" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.900336 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvrq\" (UniqueName: \"kubernetes.io/projected/dec16a65-ad25-4b92-8981-f5bcb74610af-kube-api-access-5lvrq\") pod \"openstack-operator-index-cnlw4\" (UID: \"dec16a65-ad25-4b92-8981-f5bcb74610af\") " pod="openstack-operators/openstack-operator-index-cnlw4" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.918589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvrq\" (UniqueName: \"kubernetes.io/projected/dec16a65-ad25-4b92-8981-f5bcb74610af-kube-api-access-5lvrq\") pod \"openstack-operator-index-cnlw4\" (UID: \"dec16a65-ad25-4b92-8981-f5bcb74610af\") " pod="openstack-operators/openstack-operator-index-cnlw4" Nov 21 13:53:09 crc kubenswrapper[4675]: I1121 13:53:09.962689 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cnlw4" Nov 21 13:53:10 crc kubenswrapper[4675]: I1121 13:53:10.407854 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnlw4"] Nov 21 13:53:10 crc kubenswrapper[4675]: I1121 13:53:10.995564 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnlw4" event={"ID":"dec16a65-ad25-4b92-8981-f5bcb74610af","Type":"ContainerStarted","Data":"86475d7755e1a85ab0bec10a960faf6637bda5a5369adc7ef59da14ffb2f7279"} Nov 21 13:53:13 crc kubenswrapper[4675]: I1121 13:53:13.219208 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cnlw4"] Nov 21 13:53:13 crc kubenswrapper[4675]: I1121 13:53:13.825227 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4lmmn"] Nov 21 13:53:13 crc kubenswrapper[4675]: I1121 13:53:13.827128 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:13 crc kubenswrapper[4675]: I1121 13:53:13.834035 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4lmmn"] Nov 21 13:53:13 crc kubenswrapper[4675]: I1121 13:53:13.967641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59nj\" (UniqueName: \"kubernetes.io/projected/d570523a-f2e0-4913-a405-ac5b8582b059-kube-api-access-l59nj\") pod \"openstack-operator-index-4lmmn\" (UID: \"d570523a-f2e0-4913-a405-ac5b8582b059\") " pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:14 crc kubenswrapper[4675]: I1121 13:53:14.069198 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l59nj\" (UniqueName: \"kubernetes.io/projected/d570523a-f2e0-4913-a405-ac5b8582b059-kube-api-access-l59nj\") pod \"openstack-operator-index-4lmmn\" (UID: \"d570523a-f2e0-4913-a405-ac5b8582b059\") " pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:14 crc kubenswrapper[4675]: I1121 13:53:14.089345 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l59nj\" (UniqueName: \"kubernetes.io/projected/d570523a-f2e0-4913-a405-ac5b8582b059-kube-api-access-l59nj\") pod \"openstack-operator-index-4lmmn\" (UID: \"d570523a-f2e0-4913-a405-ac5b8582b059\") " pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:14 crc kubenswrapper[4675]: I1121 13:53:14.149565 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:14 crc kubenswrapper[4675]: I1121 13:53:14.600968 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4lmmn"] Nov 21 13:53:14 crc kubenswrapper[4675]: W1121 13:53:14.603553 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd570523a_f2e0_4913_a405_ac5b8582b059.slice/crio-d2e46f750e90c53bc313bd0dd55269693dbfccaeff7a9b9cd1efbf5a45dd8ac6 WatchSource:0}: Error finding container d2e46f750e90c53bc313bd0dd55269693dbfccaeff7a9b9cd1efbf5a45dd8ac6: Status 404 returned error can't find the container with id d2e46f750e90c53bc313bd0dd55269693dbfccaeff7a9b9cd1efbf5a45dd8ac6 Nov 21 13:53:15 crc kubenswrapper[4675]: I1121 13:53:15.031904 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4lmmn" event={"ID":"d570523a-f2e0-4913-a405-ac5b8582b059","Type":"ContainerStarted","Data":"d2e46f750e90c53bc313bd0dd55269693dbfccaeff7a9b9cd1efbf5a45dd8ac6"} Nov 21 13:53:15 crc kubenswrapper[4675]: I1121 13:53:15.173674 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-896kh" Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.107611 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnlw4" event={"ID":"dec16a65-ad25-4b92-8981-f5bcb74610af","Type":"ContainerStarted","Data":"92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf"} Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.107678 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cnlw4" podUID="dec16a65-ad25-4b92-8981-f5bcb74610af" containerName="registry-server" containerID="cri-o://92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf" gracePeriod=2 Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.109661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4lmmn" event={"ID":"d570523a-f2e0-4913-a405-ac5b8582b059","Type":"ContainerStarted","Data":"750e4917609954f75d6b0c88e4216b020afa6c9dd67c938115b48736bead9a71"} Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.135682 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cnlw4" podStartSLOduration=2.145241553 podStartE2EDuration="15.135659456s" podCreationTimestamp="2025-11-21 13:53:09 +0000 UTC" firstStartedPulling="2025-11-21 13:53:10.410285556 +0000 UTC m=+1267.136700283" lastFinishedPulling="2025-11-21 13:53:23.400703459 +0000 UTC m=+1280.127118186" observedRunningTime="2025-11-21 13:53:24.123754888 +0000 UTC m=+1280.850169635" watchObservedRunningTime="2025-11-21 13:53:24.135659456 +0000 UTC m=+1280.862074183" Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.149858 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.149919 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4lmmn" Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.152004 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4lmmn" 
Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.179279 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4lmmn"
Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.643122 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnlw4"
Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.777753 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvrq\" (UniqueName: \"kubernetes.io/projected/dec16a65-ad25-4b92-8981-f5bcb74610af-kube-api-access-5lvrq\") pod \"dec16a65-ad25-4b92-8981-f5bcb74610af\" (UID: \"dec16a65-ad25-4b92-8981-f5bcb74610af\") "
Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.783502 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec16a65-ad25-4b92-8981-f5bcb74610af-kube-api-access-5lvrq" (OuterVolumeSpecName: "kube-api-access-5lvrq") pod "dec16a65-ad25-4b92-8981-f5bcb74610af" (UID: "dec16a65-ad25-4b92-8981-f5bcb74610af"). InnerVolumeSpecName "kube-api-access-5lvrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:53:24 crc kubenswrapper[4675]: I1121 13:53:24.880493 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvrq\" (UniqueName: \"kubernetes.io/projected/dec16a65-ad25-4b92-8981-f5bcb74610af-kube-api-access-5lvrq\") on node \"crc\" DevicePath \"\""
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.120855 4675 generic.go:334] "Generic (PLEG): container finished" podID="dec16a65-ad25-4b92-8981-f5bcb74610af" containerID="92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf" exitCode=0
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.120913 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnlw4"
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.120922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnlw4" event={"ID":"dec16a65-ad25-4b92-8981-f5bcb74610af","Type":"ContainerDied","Data":"92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf"}
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.120967 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnlw4" event={"ID":"dec16a65-ad25-4b92-8981-f5bcb74610af","Type":"ContainerDied","Data":"86475d7755e1a85ab0bec10a960faf6637bda5a5369adc7ef59da14ffb2f7279"}
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.120986 4675 scope.go:117] "RemoveContainer" containerID="92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf"
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.144663 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cnlw4"]
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.147871 4675 scope.go:117] "RemoveContainer" containerID="92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf"
Nov 21 13:53:25 crc kubenswrapper[4675]: E1121 13:53:25.148277 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf\": container with ID starting with 92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf not found: ID does not exist" containerID="92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf"
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.148343 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf"} err="failed to get container status \"92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf\": rpc error: code = NotFound desc = could not find container \"92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf\": container with ID starting with 92a0771f7dbb9d922dda4abe73aeb70e552331b6efcd62c0f720d8a673d13bcf not found: ID does not exist"
Nov 21 13:53:25 crc kubenswrapper[4675]: I1121 13:53:25.149597 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cnlw4"]
Nov 21 13:53:26 crc kubenswrapper[4675]: I1121 13:53:26.859222 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec16a65-ad25-4b92-8981-f5bcb74610af" path="/var/lib/kubelet/pods/dec16a65-ad25-4b92-8981-f5bcb74610af/volumes"
Nov 21 13:53:34 crc kubenswrapper[4675]: I1121 13:53:34.189608 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4lmmn"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.335180 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"]
Nov 21 13:53:43 crc kubenswrapper[4675]: E1121 13:53:43.336032 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec16a65-ad25-4b92-8981-f5bcb74610af" containerName="registry-server"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.336046 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec16a65-ad25-4b92-8981-f5bcb74610af" containerName="registry-server"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.336258 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec16a65-ad25-4b92-8981-f5bcb74610af" containerName="registry-server"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.337435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.340444 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-v4v77"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.352273 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"]
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.404409 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w44x\" (UniqueName: \"kubernetes.io/projected/4ebf20ac-e131-4e83-8493-aab35b1f206a-kube-api-access-9w44x\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.404700 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-bundle\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.404802 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-util\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.506796 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w44x\" (UniqueName: \"kubernetes.io/projected/4ebf20ac-e131-4e83-8493-aab35b1f206a-kube-api-access-9w44x\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.507118 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-bundle\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.507238 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-util\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.507655 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-bundle\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.507754 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-util\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.537508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w44x\" (UniqueName: \"kubernetes.io/projected/4ebf20ac-e131-4e83-8493-aab35b1f206a-kube-api-access-9w44x\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:43 crc kubenswrapper[4675]: I1121 13:53:43.655408 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"
Nov 21 13:53:44 crc kubenswrapper[4675]: I1121 13:53:44.093493 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n"]
Nov 21 13:53:44 crc kubenswrapper[4675]: W1121 13:53:44.097238 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ebf20ac_e131_4e83_8493_aab35b1f206a.slice/crio-8300bf6c181b141b9a499f6ea6c271409abd8d55fc21ad273f57c390c376e968 WatchSource:0}: Error finding container 8300bf6c181b141b9a499f6ea6c271409abd8d55fc21ad273f57c390c376e968: Status 404 returned error can't find the container with id 8300bf6c181b141b9a499f6ea6c271409abd8d55fc21ad273f57c390c376e968
Nov 21 13:53:44 crc kubenswrapper[4675]: I1121 13:53:44.277960 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerStarted","Data":"537ecad32b6590e8bdc1e6a0814a8f2cbbc0c6d4cdec588a6514b11820f8d199"}
Nov 21 13:53:44 crc kubenswrapper[4675]: I1121 13:53:44.278008 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerStarted","Data":"8300bf6c181b141b9a499f6ea6c271409abd8d55fc21ad273f57c390c376e968"}
Nov 21 13:53:45 crc kubenswrapper[4675]: I1121 13:53:45.286856 4675 generic.go:334] "Generic (PLEG): container finished" podID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerID="537ecad32b6590e8bdc1e6a0814a8f2cbbc0c6d4cdec588a6514b11820f8d199" exitCode=0
Nov 21 13:53:45 crc kubenswrapper[4675]: I1121 13:53:45.286922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerDied","Data":"537ecad32b6590e8bdc1e6a0814a8f2cbbc0c6d4cdec588a6514b11820f8d199"}
pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerDied","Data":"537ecad32b6590e8bdc1e6a0814a8f2cbbc0c6d4cdec588a6514b11820f8d199"} Nov 21 13:53:46 crc kubenswrapper[4675]: I1121 13:53:46.305605 4675 generic.go:334] "Generic (PLEG): container finished" podID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerID="3f71ebed924994796cbf6c4d84ea97ce7ad8880defef545e3d02680411f10d10" exitCode=0 Nov 21 13:53:46 crc kubenswrapper[4675]: I1121 13:53:46.305891 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerDied","Data":"3f71ebed924994796cbf6c4d84ea97ce7ad8880defef545e3d02680411f10d10"} Nov 21 13:53:47 crc kubenswrapper[4675]: I1121 13:53:47.317440 4675 generic.go:334] "Generic (PLEG): container finished" podID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerID="aa8956ca86a4603325f5938714fc4cf0b94f6a0d3c80aac75f2e56d486aca87e" exitCode=0 Nov 21 13:53:47 crc kubenswrapper[4675]: I1121 13:53:47.317578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerDied","Data":"aa8956ca86a4603325f5938714fc4cf0b94f6a0d3c80aac75f2e56d486aca87e"} Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.632888 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.698832 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-bundle\") pod \"4ebf20ac-e131-4e83-8493-aab35b1f206a\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.698950 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-util\") pod \"4ebf20ac-e131-4e83-8493-aab35b1f206a\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.699036 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w44x\" (UniqueName: \"kubernetes.io/projected/4ebf20ac-e131-4e83-8493-aab35b1f206a-kube-api-access-9w44x\") pod \"4ebf20ac-e131-4e83-8493-aab35b1f206a\" (UID: \"4ebf20ac-e131-4e83-8493-aab35b1f206a\") " Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.699583 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-bundle" (OuterVolumeSpecName: "bundle") pod "4ebf20ac-e131-4e83-8493-aab35b1f206a" (UID: "4ebf20ac-e131-4e83-8493-aab35b1f206a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.703819 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebf20ac-e131-4e83-8493-aab35b1f206a-kube-api-access-9w44x" (OuterVolumeSpecName: "kube-api-access-9w44x") pod "4ebf20ac-e131-4e83-8493-aab35b1f206a" (UID: "4ebf20ac-e131-4e83-8493-aab35b1f206a"). 
InnerVolumeSpecName "kube-api-access-9w44x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.712909 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-util" (OuterVolumeSpecName: "util") pod "4ebf20ac-e131-4e83-8493-aab35b1f206a" (UID: "4ebf20ac-e131-4e83-8493-aab35b1f206a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.800873 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.800912 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w44x\" (UniqueName: \"kubernetes.io/projected/4ebf20ac-e131-4e83-8493-aab35b1f206a-kube-api-access-9w44x\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:48 crc kubenswrapper[4675]: I1121 13:53:48.800923 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ebf20ac-e131-4e83-8493-aab35b1f206a-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:49 crc kubenswrapper[4675]: I1121 13:53:49.337433 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" event={"ID":"4ebf20ac-e131-4e83-8493-aab35b1f206a","Type":"ContainerDied","Data":"8300bf6c181b141b9a499f6ea6c271409abd8d55fc21ad273f57c390c376e968"} Nov 21 13:53:49 crc kubenswrapper[4675]: I1121 13:53:49.337483 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8300bf6c181b141b9a499f6ea6c271409abd8d55fc21ad273f57c390c376e968" Nov 21 13:53:49 crc kubenswrapper[4675]: I1121 13:53:49.337490 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.983978 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv"] Nov 21 13:53:55 crc kubenswrapper[4675]: E1121 13:53:55.987609 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="util" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.987634 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="util" Nov 21 13:53:55 crc kubenswrapper[4675]: E1121 13:53:55.987667 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="extract" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.987675 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="extract" Nov 21 13:53:55 crc kubenswrapper[4675]: E1121 13:53:55.987694 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="pull" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.987703 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="pull" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.987873 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebf20ac-e131-4e83-8493-aab35b1f206a" containerName="extract" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.988528 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:53:55 crc kubenswrapper[4675]: I1121 13:53:55.991591 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dd7m6" Nov 21 13:53:56 crc kubenswrapper[4675]: I1121 13:53:56.024977 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqc82\" (UniqueName: \"kubernetes.io/projected/867fb4c9-f1c0-49da-9a71-3372347fe4f2-kube-api-access-qqc82\") pod \"openstack-operator-controller-operator-7bc9ddc77b-27rxv\" (UID: \"867fb4c9-f1c0-49da-9a71-3372347fe4f2\") " pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:53:56 crc kubenswrapper[4675]: I1121 13:53:56.025930 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv"] Nov 21 13:53:56 crc kubenswrapper[4675]: I1121 13:53:56.126270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqc82\" (UniqueName: \"kubernetes.io/projected/867fb4c9-f1c0-49da-9a71-3372347fe4f2-kube-api-access-qqc82\") pod \"openstack-operator-controller-operator-7bc9ddc77b-27rxv\" (UID: \"867fb4c9-f1c0-49da-9a71-3372347fe4f2\") " pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:53:56 crc kubenswrapper[4675]: I1121 13:53:56.153757 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqc82\" (UniqueName: \"kubernetes.io/projected/867fb4c9-f1c0-49da-9a71-3372347fe4f2-kube-api-access-qqc82\") pod \"openstack-operator-controller-operator-7bc9ddc77b-27rxv\" 
(UID: \"867fb4c9-f1c0-49da-9a71-3372347fe4f2\") " pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:53:56 crc kubenswrapper[4675]: I1121 13:53:56.310907 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:53:56 crc kubenswrapper[4675]: I1121 13:53:56.606734 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv"] Nov 21 13:53:57 crc kubenswrapper[4675]: I1121 13:53:57.405630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" event={"ID":"867fb4c9-f1c0-49da-9a71-3372347fe4f2","Type":"ContainerStarted","Data":"75c767ce7da2ff82fd2e8e895615a29402d7c268b642f4710032e581927993e0"} Nov 21 13:54:01 crc kubenswrapper[4675]: I1121 13:54:01.446648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" event={"ID":"867fb4c9-f1c0-49da-9a71-3372347fe4f2","Type":"ContainerStarted","Data":"345294b6199e627ebc03991f1c4a17203abbd0057610e0f1399afb1d69fa07bb"} Nov 21 13:54:01 crc kubenswrapper[4675]: I1121 13:54:01.447230 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:54:01 crc kubenswrapper[4675]: I1121 13:54:01.473745 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" podStartSLOduration=2.305942087 podStartE2EDuration="6.47372757s" podCreationTimestamp="2025-11-21 13:53:55 +0000 UTC" firstStartedPulling="2025-11-21 13:53:56.625211818 +0000 UTC m=+1313.351626545" lastFinishedPulling="2025-11-21 13:54:00.792997301 +0000 UTC m=+1317.519412028" observedRunningTime="2025-11-21 13:54:01.473399782 +0000 UTC m=+1318.199814519" watchObservedRunningTime="2025-11-21 13:54:01.47372757 +0000 UTC m=+1318.200142297" Nov 21 13:54:06 crc kubenswrapper[4675]: I1121 13:54:06.323452 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-27rxv" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.851405 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.853287 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.855750 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jwdrn" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.867845 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.869310 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.871438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bjvzz" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.895569 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.895785 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.897184 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.902442 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6gpqx" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.905178 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-726dt"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.906578 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.910977 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4rvj4" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.925884 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.941135 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.942786 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.945748 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9m6rs" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.952119 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.960539 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.967195 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr5b2\" (UniqueName: \"kubernetes.io/projected/f3631bac-6fa8-4ad8-bbad-df880af19292-kube-api-access-lr5b2\") pod \"barbican-operator-controller-manager-86dc4d89c8-k4lls\" (UID: \"f3631bac-6fa8-4ad8-bbad-df880af19292\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.967342 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx2tz\" (UniqueName: \"kubernetes.io/projected/7e3588ab-94d7-482f-97c4-67d573181e2c-kube-api-access-fx2tz\") pod \"cinder-operator-controller-manager-79856dc55c-h9twj\" (UID: \"7e3588ab-94d7-482f-97c4-67d573181e2c\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.967369 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrth\" (UniqueName: \"kubernetes.io/projected/45c2d5a9-a319-4012-91de-77769b6ad913-kube-api-access-6zrth\") pod \"designate-operator-controller-manager-7d695c9b56-dd9xn\" (UID: \"45c2d5a9-a319-4012-91de-77769b6ad913\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.967394 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzbxx\" (UniqueName: \"kubernetes.io/projected/26fa3df8-f4d3-44d1-8e9b-c20dca446570-kube-api-access-pzbxx\") pod \"glance-operator-controller-manager-68b95954c9-726dt\" (UID: \"26fa3df8-f4d3-44d1-8e9b-c20dca446570\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.968864 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-726dt"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.985768 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.987025 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.992230 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f"] Nov 21 13:54:28 crc kubenswrapper[4675]: I1121 13:54:28.993471 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7wgfb" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.000101 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.001592 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.006731 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.006907 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xqzls" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.014532 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.022151 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.024432 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.041525 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kvbfd" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.041693 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.059695 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.061016 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.063082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vfwn4" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071213 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx2tz\" (UniqueName: \"kubernetes.io/projected/7e3588ab-94d7-482f-97c4-67d573181e2c-kube-api-access-fx2tz\") pod \"cinder-operator-controller-manager-79856dc55c-h9twj\" (UID: \"7e3588ab-94d7-482f-97c4-67d573181e2c\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrth\" (UniqueName: \"kubernetes.io/projected/45c2d5a9-a319-4012-91de-77769b6ad913-kube-api-access-6zrth\") pod \"designate-operator-controller-manager-7d695c9b56-dd9xn\" (UID: \"45c2d5a9-a319-4012-91de-77769b6ad913\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071271 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzbxx\" (UniqueName: \"kubernetes.io/projected/26fa3df8-f4d3-44d1-8e9b-c20dca446570-kube-api-access-pzbxx\") pod \"glance-operator-controller-manager-68b95954c9-726dt\" (UID: \"26fa3df8-f4d3-44d1-8e9b-c20dca446570\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071299 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smsgc\" (UniqueName: \"kubernetes.io/projected/a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca-kube-api-access-smsgc\") pod \"horizon-operator-controller-manager-68c9694994-hld5f\" (UID: \"a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071338 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lflk\" (UniqueName: \"kubernetes.io/projected/43da63e0-75a0-4e90-9e81-3b3be38a45b1-kube-api-access-2lflk\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071358 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgzg\" (UniqueName: \"kubernetes.io/projected/2edf1cc1-0bd0-4329-969f-c2890b507972-kube-api-access-7hgzg\") pod \"ironic-operator-controller-manager-5bfcdc958c-kxfcm\" (UID: \"2edf1cc1-0bd0-4329-969f-c2890b507972\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071380 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43da63e0-75a0-4e90-9e81-3b3be38a45b1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071407 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrltv\" (UniqueName: \"kubernetes.io/projected/2808d52f-0a70-48df-9b55-052faa81f93c-kube-api-access-qrltv\") pod \"heat-operator-controller-manager-774b86978c-4gf9h\" (UID: \"2808d52f-0a70-48df-9b55-052faa81f93c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.071451 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr5b2\" (UniqueName: \"kubernetes.io/projected/f3631bac-6fa8-4ad8-bbad-df880af19292-kube-api-access-lr5b2\") pod \"barbican-operator-controller-manager-86dc4d89c8-k4lls\" (UID: \"f3631bac-6fa8-4ad8-bbad-df880af19292\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.111078 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.114263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx2tz\" (UniqueName: \"kubernetes.io/projected/7e3588ab-94d7-482f-97c4-67d573181e2c-kube-api-access-fx2tz\") pod \"cinder-operator-controller-manager-79856dc55c-h9twj\" (UID: \"7e3588ab-94d7-482f-97c4-67d573181e2c\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.114807 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzbxx\" (UniqueName: \"kubernetes.io/projected/26fa3df8-f4d3-44d1-8e9b-c20dca446570-kube-api-access-pzbxx\") pod \"glance-operator-controller-manager-68b95954c9-726dt\" (UID: \"26fa3df8-f4d3-44d1-8e9b-c20dca446570\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.121044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrth\" (UniqueName: \"kubernetes.io/projected/45c2d5a9-a319-4012-91de-77769b6ad913-kube-api-access-6zrth\") pod \"designate-operator-controller-manager-7d695c9b56-dd9xn\" (UID: \"45c2d5a9-a319-4012-91de-77769b6ad913\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.130752 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.132084 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.151193 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wb98q" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.153833 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr5b2\" (UniqueName: \"kubernetes.io/projected/f3631bac-6fa8-4ad8-bbad-df880af19292-kube-api-access-lr5b2\") pod \"barbican-operator-controller-manager-86dc4d89c8-k4lls\" (UID: \"f3631bac-6fa8-4ad8-bbad-df880af19292\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.154436 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174291 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smsgc\" (UniqueName: \"kubernetes.io/projected/a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca-kube-api-access-smsgc\") pod \"horizon-operator-controller-manager-68c9694994-hld5f\" (UID: \"a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174563 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lflk\" (UniqueName: \"kubernetes.io/projected/43da63e0-75a0-4e90-9e81-3b3be38a45b1-kube-api-access-2lflk\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174601 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgzg\" (UniqueName: \"kubernetes.io/projected/2edf1cc1-0bd0-4329-969f-c2890b507972-kube-api-access-7hgzg\") pod \"ironic-operator-controller-manager-5bfcdc958c-kxfcm\" (UID: \"2edf1cc1-0bd0-4329-969f-c2890b507972\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174645 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43da63e0-75a0-4e90-9e81-3b3be38a45b1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrltv\" (UniqueName: \"kubernetes.io/projected/2808d52f-0a70-48df-9b55-052faa81f93c-kube-api-access-qrltv\") pod \"heat-operator-controller-manager-774b86978c-4gf9h\" (UID: \"2808d52f-0a70-48df-9b55-052faa81f93c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174857 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf762\" (UniqueName: \"kubernetes.io/projected/89aec3aa-b2d8-4702-b0fd-005c6d51c669-kube-api-access-tf762\") pod 
\"keystone-operator-controller-manager-748dc6576f-lmpfg\" (UID: \"89aec3aa-b2d8-4702-b0fd-005c6d51c669\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.174901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxqz\" (UniqueName: \"kubernetes.io/projected/cc7542b4-b4d9-46e5-8819-784ec50c9c11-kube-api-access-7wxqz\") pod \"manila-operator-controller-manager-58bb8d67cc-lf6g7\" (UID: \"cc7542b4-b4d9-46e5-8819-784ec50c9c11\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:54:29 crc kubenswrapper[4675]: E1121 13:54:29.183346 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 21 13:54:29 crc kubenswrapper[4675]: E1121 13:54:29.183437 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43da63e0-75a0-4e90-9e81-3b3be38a45b1-cert podName:43da63e0-75a0-4e90-9e81-3b3be38a45b1 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:29.683410221 +0000 UTC m=+1346.409824948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43da63e0-75a0-4e90-9e81-3b3be38a45b1-cert") pod "infra-operator-controller-manager-d5cc86f4b-qb7zq" (UID: "43da63e0-75a0-4e90-9e81-3b3be38a45b1") : secret "infra-operator-webhook-server-cert" not found Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.188164 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.189132 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.226201 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.227901 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lflk\" (UniqueName: \"kubernetes.io/projected/43da63e0-75a0-4e90-9e81-3b3be38a45b1-kube-api-access-2lflk\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.230551 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgzg\" (UniqueName: \"kubernetes.io/projected/2edf1cc1-0bd0-4329-969f-c2890b507972-kube-api-access-7hgzg\") pod \"ironic-operator-controller-manager-5bfcdc958c-kxfcm\" (UID: \"2edf1cc1-0bd0-4329-969f-c2890b507972\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.231006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrltv\" (UniqueName: \"kubernetes.io/projected/2808d52f-0a70-48df-9b55-052faa81f93c-kube-api-access-qrltv\") pod \"heat-operator-controller-manager-774b86978c-4gf9h\" (UID: \"2808d52f-0a70-48df-9b55-052faa81f93c\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.235507 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.240824 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smsgc\" (UniqueName: \"kubernetes.io/projected/a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca-kube-api-access-smsgc\") pod \"horizon-operator-controller-manager-68c9694994-hld5f\" (UID: \"a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.270227 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.274366 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.277141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf762\" (UniqueName: \"kubernetes.io/projected/89aec3aa-b2d8-4702-b0fd-005c6d51c669-kube-api-access-tf762\") pod \"keystone-operator-controller-manager-748dc6576f-lmpfg\" (UID: \"89aec3aa-b2d8-4702-b0fd-005c6d51c669\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.280761 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxqz\" (UniqueName: \"kubernetes.io/projected/cc7542b4-b4d9-46e5-8819-784ec50c9c11-kube-api-access-7wxqz\") pod \"manila-operator-controller-manager-58bb8d67cc-lf6g7\" (UID: \"cc7542b4-b4d9-46e5-8819-784ec50c9c11\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.281415 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.296589 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-t24dk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.319876 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.323807 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf762\" (UniqueName: \"kubernetes.io/projected/89aec3aa-b2d8-4702-b0fd-005c6d51c669-kube-api-access-tf762\") pod \"keystone-operator-controller-manager-748dc6576f-lmpfg\" (UID: \"89aec3aa-b2d8-4702-b0fd-005c6d51c669\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.352662 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.362666 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.378616 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxqz\" (UniqueName: \"kubernetes.io/projected/cc7542b4-b4d9-46e5-8819-784ec50c9c11-kube-api-access-7wxqz\") pod \"manila-operator-controller-manager-58bb8d67cc-lf6g7\" (UID: \"cc7542b4-b4d9-46e5-8819-784ec50c9c11\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.382096 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.383158 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zglcj\" (UniqueName: \"kubernetes.io/projected/eb3d3afa-eaa5-4271-8a33-45a009a9742a-kube-api-access-zglcj\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-p4r2x\" (UID: \"eb3d3afa-eaa5-4271-8a33-45a009a9742a\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.386867 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ch8rm" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.393553 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.402822 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.430492 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.446805 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.453957 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2b8b4" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.474315 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.485170 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l262s\" (UniqueName: \"kubernetes.io/projected/693e699a-cdc4-4282-9ba6-6947c3e42726-kube-api-access-l262s\") pod \"nova-operator-controller-manager-79556f57fc-nmtt7\" (UID: \"693e699a-cdc4-4282-9ba6-6947c3e42726\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.485353 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zglcj\" (UniqueName: \"kubernetes.io/projected/eb3d3afa-eaa5-4271-8a33-45a009a9742a-kube-api-access-zglcj\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-p4r2x\" (UID: \"eb3d3afa-eaa5-4271-8a33-45a009a9742a\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.485394 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6sj\" (UniqueName: \"kubernetes.io/projected/27a202b7-1cf0-4dda-a010-6d59fbe881ed-kube-api-access-dn6sj\") pod \"neutron-operator-controller-manager-7c57c8bbc4-46782\" (UID: \"27a202b7-1cf0-4dda-a010-6d59fbe881ed\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.506967 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.523704 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zglcj\" (UniqueName: \"kubernetes.io/projected/eb3d3afa-eaa5-4271-8a33-45a009a9742a-kube-api-access-zglcj\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-p4r2x\" (UID: \"eb3d3afa-eaa5-4271-8a33-45a009a9742a\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.532858 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.535712 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.540761 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-52r89" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.586489 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.588588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l262s\" (UniqueName: \"kubernetes.io/projected/693e699a-cdc4-4282-9ba6-6947c3e42726-kube-api-access-l262s\") pod \"nova-operator-controller-manager-79556f57fc-nmtt7\" (UID: \"693e699a-cdc4-4282-9ba6-6947c3e42726\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.588810 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkgrd\" (UniqueName: \"kubernetes.io/projected/e8768c70-accf-460e-a781-b5d9eff26f2e-kube-api-access-jkgrd\") pod \"octavia-operator-controller-manager-fd75fd47d-bhcrz\" (UID: \"e8768c70-accf-460e-a781-b5d9eff26f2e\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.588864 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6sj\" (UniqueName: \"kubernetes.io/projected/27a202b7-1cf0-4dda-a010-6d59fbe881ed-kube-api-access-dn6sj\") pod \"neutron-operator-controller-manager-7c57c8bbc4-46782\" (UID: \"27a202b7-1cf0-4dda-a010-6d59fbe881ed\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.599327 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.614530 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.618202 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.620958 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l262s\" (UniqueName: \"kubernetes.io/projected/693e699a-cdc4-4282-9ba6-6947c3e42726-kube-api-access-l262s\") pod \"nova-operator-controller-manager-79556f57fc-nmtt7\" (UID: \"693e699a-cdc4-4282-9ba6-6947c3e42726\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.624167 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.629982 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7d9f8" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.630517 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.634392 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.634437 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9v2zt" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.643599 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.646626 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6sj\" (UniqueName: \"kubernetes.io/projected/27a202b7-1cf0-4dda-a010-6d59fbe881ed-kube-api-access-dn6sj\") pod \"neutron-operator-controller-manager-7c57c8bbc4-46782\" (UID: \"27a202b7-1cf0-4dda-a010-6d59fbe881ed\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.655025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.667223 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.676421 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.678219 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.687587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9jbwj" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.687789 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.690787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43da63e0-75a0-4e90-9e81-3b3be38a45b1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.690832 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkgrd\" (UniqueName: \"kubernetes.io/projected/e8768c70-accf-460e-a781-b5d9eff26f2e-kube-api-access-jkgrd\") pod \"octavia-operator-controller-manager-fd75fd47d-bhcrz\" (UID: \"e8768c70-accf-460e-a781-b5d9eff26f2e\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.690878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nq6\" (UniqueName: \"kubernetes.io/projected/5661f60c-1801-419e-abaa-7f5e0825f148-kube-api-access-d7nq6\") pod \"placement-operator-controller-manager-5db546f9d9-dcswk\" (UID: \"5661f60c-1801-419e-abaa-7f5e0825f148\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.690917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mv8c\" (UniqueName: \"kubernetes.io/projected/77644f3e-1a90-4f49-a43c-b3d5b23c8184-kube-api-access-8mv8c\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.690940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.690998 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjvj\" (UniqueName: \"kubernetes.io/projected/0aaf2b35-164b-400a-ad78-84961c2a599c-kube-api-access-stjvj\") pod \"ovn-operator-controller-manager-66cf5c67ff-p2fwv\" (UID: \"0aaf2b35-164b-400a-ad78-84961c2a599c\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.709507 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 
13:54:29.711290 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.714990 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4rtkd" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.718322 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.719906 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.722513 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43da63e0-75a0-4e90-9e81-3b3be38a45b1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-qb7zq\" (UID: \"43da63e0-75a0-4e90-9e81-3b3be38a45b1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.724940 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k775h" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.727400 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.733009 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.733501 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.741921 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkgrd\" (UniqueName: \"kubernetes.io/projected/e8768c70-accf-460e-a781-b5d9eff26f2e-kube-api-access-jkgrd\") pod \"octavia-operator-controller-manager-fd75fd47d-bhcrz\" (UID: \"e8768c70-accf-460e-a781-b5d9eff26f2e\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.750488 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-2l49b"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.752855 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.775569 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vx89z" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.782044 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-2l49b"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.796317 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7nq6\" (UniqueName: \"kubernetes.io/projected/5661f60c-1801-419e-abaa-7f5e0825f148-kube-api-access-d7nq6\") pod \"placement-operator-controller-manager-5db546f9d9-dcswk\" (UID: \"5661f60c-1801-419e-abaa-7f5e0825f148\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.796377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mv8c\" (UniqueName: \"kubernetes.io/projected/77644f3e-1a90-4f49-a43c-b3d5b23c8184-kube-api-access-8mv8c\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.796404 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.796479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stjvj\" (UniqueName: \"kubernetes.io/projected/0aaf2b35-164b-400a-ad78-84961c2a599c-kube-api-access-stjvj\") pod \"ovn-operator-controller-manager-66cf5c67ff-p2fwv\" (UID: \"0aaf2b35-164b-400a-ad78-84961c2a599c\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:54:29 crc kubenswrapper[4675]: E1121 13:54:29.797095 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:54:29 crc kubenswrapper[4675]: E1121 13:54:29.797146 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert podName:77644f3e-1a90-4f49-a43c-b3d5b23c8184 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:30.297131801 +0000 UTC m=+1347.023546528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" (UID: "77644f3e-1a90-4f49-a43c-b3d5b23c8184") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.834092 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.834713 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stjvj\" (UniqueName: \"kubernetes.io/projected/0aaf2b35-164b-400a-ad78-84961c2a599c-kube-api-access-stjvj\") pod \"ovn-operator-controller-manager-66cf5c67ff-p2fwv\" (UID: \"0aaf2b35-164b-400a-ad78-84961c2a599c\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.838585 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mv8c\" (UniqueName: \"kubernetes.io/projected/77644f3e-1a90-4f49-a43c-b3d5b23c8184-kube-api-access-8mv8c\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.838646 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-55qzb"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.839969 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.841476 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7nq6\" (UniqueName: \"kubernetes.io/projected/5661f60c-1801-419e-abaa-7f5e0825f148-kube-api-access-d7nq6\") pod \"placement-operator-controller-manager-5db546f9d9-dcswk\" (UID: \"5661f60c-1801-419e-abaa-7f5e0825f148\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.843824 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gbb9x" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.863301 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.866558 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-55qzb"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.897715 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/dd5c12ec-ee26-458d-85f3-2b6bd7c021f1-kube-api-access-pzn8c\") pod \"test-operator-controller-manager-5cb74df96-2l49b\" (UID: \"dd5c12ec-ee26-458d-85f3-2b6bd7c021f1\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.897941 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvtl\" (UniqueName: \"kubernetes.io/projected/e5d16474-89e2-4e35-8339-24afbb962e4b-kube-api-access-bxvtl\") pod \"swift-operator-controller-manager-6fdc4fcf86-rpmxk\" (UID: \"e5d16474-89e2-4e35-8339-24afbb962e4b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.898096 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhq7\" (UniqueName: \"kubernetes.io/projected/cfb97bdd-e357-475c-ab5c-184e50acb0dc-kube-api-access-bjhq7\") pod \"telemetry-operator-controller-manager-7fc59d4bfd-8swxd\" (UID: \"cfb97bdd-e357-475c-ab5c-184e50acb0dc\") " pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.900774 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.902026 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.904734 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.905050 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.905558 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4mvk2" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.943817 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.973177 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.978749 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.985685 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k"] Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.989854 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" Nov 21 13:54:29 crc kubenswrapper[4675]: I1121 13:54:29.994557 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-txxrr" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.000984 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/dd5c12ec-ee26-458d-85f3-2b6bd7c021f1-kube-api-access-pzn8c\") pod \"test-operator-controller-manager-5cb74df96-2l49b\" (UID: \"dd5c12ec-ee26-458d-85f3-2b6bd7c021f1\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001052 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k"] Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvtl\" (UniqueName: \"kubernetes.io/projected/e5d16474-89e2-4e35-8339-24afbb962e4b-kube-api-access-bxvtl\") pod \"swift-operator-controller-manager-6fdc4fcf86-rpmxk\" (UID: \"e5d16474-89e2-4e35-8339-24afbb962e4b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001190 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjhq7\" (UniqueName: \"kubernetes.io/projected/cfb97bdd-e357-475c-ab5c-184e50acb0dc-kube-api-access-bjhq7\") pod \"telemetry-operator-controller-manager-7fc59d4bfd-8swxd\" (UID: \"cfb97bdd-e357-475c-ab5c-184e50acb0dc\") " pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001383 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001412 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfpln\" (UniqueName: \"kubernetes.io/projected/fc4f9f2a-5093-4df0-919f-037e57993a93-kube-api-access-mfpln\") pod 
\"watcher-operator-controller-manager-864885998-55qzb\" (UID: \"fc4f9f2a-5093-4df0-919f-037e57993a93\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.001536 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6pv\" (UniqueName: \"kubernetes.io/projected/376edcff-4439-418a-80e3-6f6309cdb8f0-kube-api-access-5r6pv\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.043624 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.079169 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls"] Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.087365 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvtl\" (UniqueName: \"kubernetes.io/projected/e5d16474-89e2-4e35-8339-24afbb962e4b-kube-api-access-bxvtl\") pod \"swift-operator-controller-manager-6fdc4fcf86-rpmxk\" (UID: \"e5d16474-89e2-4e35-8339-24afbb962e4b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.094921 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjhq7\" (UniqueName: \"kubernetes.io/projected/cfb97bdd-e357-475c-ab5c-184e50acb0dc-kube-api-access-bjhq7\") pod \"telemetry-operator-controller-manager-7fc59d4bfd-8swxd\" (UID: \"cfb97bdd-e357-475c-ab5c-184e50acb0dc\") " pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.095313 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/dd5c12ec-ee26-458d-85f3-2b6bd7c021f1-kube-api-access-pzn8c\") pod \"test-operator-controller-manager-5cb74df96-2l49b\" (UID: \"dd5c12ec-ee26-458d-85f3-2b6bd7c021f1\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.096278 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.102768 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.103160 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.103683 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfpln\" (UniqueName: \"kubernetes.io/projected/fc4f9f2a-5093-4df0-919f-037e57993a93-kube-api-access-mfpln\") pod \"watcher-operator-controller-manager-864885998-55qzb\" (UID: \"fc4f9f2a-5093-4df0-919f-037e57993a93\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.103825 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhq7g\" (UniqueName: \"kubernetes.io/projected/fec54436-1bcf-4e1e-af27-d86372b07bbe-kube-api-access-nhq7g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-65k5k\" (UID: \"fec54436-1bcf-4e1e-af27-d86372b07bbe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.103917 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.103934 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.103993 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs podName:376edcff-4439-418a-80e3-6f6309cdb8f0 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:30.603974125 +0000 UTC m=+1347.330388842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs") pod "openstack-operator-controller-manager-79fb5496bb-5zhcc" (UID: "376edcff-4439-418a-80e3-6f6309cdb8f0") : secret "metrics-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.104009 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs podName:376edcff-4439-418a-80e3-6f6309cdb8f0 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:30.604003236 +0000 UTC m=+1347.330417963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs") pod "openstack-operator-controller-manager-79fb5496bb-5zhcc" (UID: "376edcff-4439-418a-80e3-6f6309cdb8f0") : secret "webhook-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.104190 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6pv\" (UniqueName: \"kubernetes.io/projected/376edcff-4439-418a-80e3-6f6309cdb8f0-kube-api-access-5r6pv\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.120631 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.133114 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6pv\" (UniqueName: \"kubernetes.io/projected/376edcff-4439-418a-80e3-6f6309cdb8f0-kube-api-access-5r6pv\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.145010 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfpln\" (UniqueName: \"kubernetes.io/projected/fc4f9f2a-5093-4df0-919f-037e57993a93-kube-api-access-mfpln\") pod \"watcher-operator-controller-manager-864885998-55qzb\" (UID: \"fc4f9f2a-5093-4df0-919f-037e57993a93\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.168024 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.197993 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj"] Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.206648 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhq7g\" (UniqueName: \"kubernetes.io/projected/fec54436-1bcf-4e1e-af27-d86372b07bbe-kube-api-access-nhq7g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-65k5k\" (UID: \"fec54436-1bcf-4e1e-af27-d86372b07bbe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.242207 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhq7g\" (UniqueName: \"kubernetes.io/projected/fec54436-1bcf-4e1e-af27-d86372b07bbe-kube-api-access-nhq7g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-65k5k\" (UID: \"fec54436-1bcf-4e1e-af27-d86372b07bbe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.310049 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.310295 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.310340 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert podName:77644f3e-1a90-4f49-a43c-b3d5b23c8184 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:31.310326536 +0000 UTC m=+1348.036741263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" (UID: "77644f3e-1a90-4f49-a43c-b3d5b23c8184") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.326188 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.378081 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.594060 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn"] Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.618522 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-726dt"] Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.620148 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.620241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.620387 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.620472 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs podName:376edcff-4439-418a-80e3-6f6309cdb8f0 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:31.620447952 +0000 UTC m=+1348.346862679 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs") pod "openstack-operator-controller-manager-79fb5496bb-5zhcc" (UID: "376edcff-4439-418a-80e3-6f6309cdb8f0") : secret "webhook-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.620672 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: E1121 13:54:30.620735 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs podName:376edcff-4439-418a-80e3-6f6309cdb8f0 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:31.620718149 +0000 UTC m=+1348.347132926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs") pod "openstack-operator-controller-manager-79fb5496bb-5zhcc" (UID: "376edcff-4439-418a-80e3-6f6309cdb8f0") : secret "metrics-server-cert" not found Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.712835 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" event={"ID":"45c2d5a9-a319-4012-91de-77769b6ad913","Type":"ContainerStarted","Data":"ba3e6b4d992ff50c53a4f2ad7e2c1af2f27d3515421dcdd5ab7086e87eef67e2"} Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.717519 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" event={"ID":"7e3588ab-94d7-482f-97c4-67d573181e2c","Type":"ContainerStarted","Data":"45635471ad0cf8d2ab241d5b19a7623346189cbb236630a574719f6ad43d6e23"} Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.724614 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" event={"ID":"f3631bac-6fa8-4ad8-bbad-df880af19292","Type":"ContainerStarted","Data":"2c293b5c5a3deaf34a48d8732e57abf00a742301a6f140f4229922b0bae99b5d"} Nov 21 13:54:30 crc kubenswrapper[4675]: I1121 13:54:30.726938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" event={"ID":"26fa3df8-f4d3-44d1-8e9b-c20dca446570","Type":"ContainerStarted","Data":"62538b3c5337016c85c505eff7058b4f547aab3b6cd66b76cf95a31f5c5abcc4"} Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.026792 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.041172 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h"] Nov 21 13:54:31 crc kubenswrapper[4675]: W1121 13:54:31.043955 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89aec3aa_b2d8_4702_b0fd_005c6d51c669.slice/crio-79fe68359032ccc950fa86543a7b83bd1e746dd15dd00ea8e9a923c506b7568e WatchSource:0}: Error finding container 79fe68359032ccc950fa86543a7b83bd1e746dd15dd00ea8e9a923c506b7568e: Status 404 returned error can't find the container with id 79fe68359032ccc950fa86543a7b83bd1e746dd15dd00ea8e9a923c506b7568e Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.068715 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.356417 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:31 crc kubenswrapper[4675]: E1121 13:54:31.356684 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:54:31 crc kubenswrapper[4675]: E1121 
13:54:31.356766 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert podName:77644f3e-1a90-4f49-a43c-b3d5b23c8184 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:33.356743686 +0000 UTC m=+1350.083158413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" (UID: "77644f3e-1a90-4f49-a43c-b3d5b23c8184") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.660879 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.661032 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:31 crc kubenswrapper[4675]: E1121 13:54:31.661040 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 13:54:31 crc kubenswrapper[4675]: E1121 13:54:31.661123 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs podName:376edcff-4439-418a-80e3-6f6309cdb8f0 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:33.661104189 +0000 UTC m=+1350.387518926 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs") pod "openstack-operator-controller-manager-79fb5496bb-5zhcc" (UID: "376edcff-4439-418a-80e3-6f6309cdb8f0") : secret "webhook-server-cert" not found Nov 21 13:54:31 crc kubenswrapper[4675]: E1121 13:54:31.661507 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 21 13:54:31 crc kubenswrapper[4675]: E1121 13:54:31.661756 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs podName:376edcff-4439-418a-80e3-6f6309cdb8f0 nodeName:}" failed. No retries permitted until 2025-11-21 13:54:33.66156288 +0000 UTC m=+1350.387977667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs") pod "openstack-operator-controller-manager-79fb5496bb-5zhcc" (UID: "376edcff-4439-418a-80e3-6f6309cdb8f0") : secret "metrics-server-cert" not found Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.710777 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.723416 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv"] Nov 21 13:54:31 crc kubenswrapper[4675]: W1121 13:54:31.734446 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod693e699a_cdc4_4282_9ba6_6947c3e42726.slice/crio-e9de185e74226619643db74e0c4bc3ead2c6aecefc7ff12ca19d80cea75a875b WatchSource:0}: Error finding container e9de185e74226619643db74e0c4bc3ead2c6aecefc7ff12ca19d80cea75a875b: Status 404 returned error can't find the container with id e9de185e74226619643db74e0c4bc3ead2c6aecefc7ff12ca19d80cea75a875b Nov 21 13:54:31 crc kubenswrapper[4675]: W1121 13:54:31.749694 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2edf1cc1_0bd0_4329_969f_c2890b507972.slice/crio-102e141a0aa810617f76bdef35325baa74edfaa60a23cf5eec7f5b2094cb493a WatchSource:0}: Error finding container 102e141a0aa810617f76bdef35325baa74edfaa60a23cf5eec7f5b2094cb493a: Status 404 returned error can't find the container with id 102e141a0aa810617f76bdef35325baa74edfaa60a23cf5eec7f5b2094cb493a Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.754082 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm"] Nov 21 13:54:31 crc kubenswrapper[4675]: W1121 13:54:31.759545 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aaf2b35_164b_400a_ad78_84961c2a599c.slice/crio-2540a32665ad5b195c84e898634e6d04bda1ebc117912a640047bee17f389233 WatchSource:0}: Error finding container 2540a32665ad5b195c84e898634e6d04bda1ebc117912a640047bee17f389233: Status 404 returned error can't find the container with id 2540a32665ad5b195c84e898634e6d04bda1ebc117912a640047bee17f389233 Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.763342 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" event={"ID":"89aec3aa-b2d8-4702-b0fd-005c6d51c669","Type":"ContainerStarted","Data":"79fe68359032ccc950fa86543a7b83bd1e746dd15dd00ea8e9a923c506b7568e"} Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.768613 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.778425 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.785556 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.800863 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.806723 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" event={"ID":"2808d52f-0a70-48df-9b55-052faa81f93c","Type":"ContainerStarted","Data":"7ff2e0d0fcb87dbd12d18008416f46123c1eb95873df047ee4c93f800c724219"} Nov 21 13:54:31 crc kubenswrapper[4675]: W1121 13:54:31.812822 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43da63e0_75a0_4e90_9e81_3b3be38a45b1.slice/crio-d735ed30a4f2523937758e93691248c5270ce094cc11e0151f825faaf02cf67f WatchSource:0}: Error finding container d735ed30a4f2523937758e93691248c5270ce094cc11e0151f825faaf02cf67f: Status 404 returned error can't find the container with id d735ed30a4f2523937758e93691248c5270ce094cc11e0151f825faaf02cf67f Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.819444 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" event={"ID":"a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca","Type":"ContainerStarted","Data":"91ceb2554fc341f84653e1b53a389f5b89c6a1b0b2844a9bf004b5ab40811c7b"} Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.971586 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782"] Nov 21 13:54:31 crc kubenswrapper[4675]: I1121 13:54:31.997525 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk"] Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.008669 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd"] Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.019106 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k"] Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.157063 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-2l49b"] Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.171245 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-55qzb"] Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.210303 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk"] Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.828992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" event={"ID":"cfb97bdd-e357-475c-ab5c-184e50acb0dc","Type":"ContainerStarted","Data":"5b6dc7c77e1ee5d6b379a84735b32099aaab902e37eafc30caed16e967854580"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.830294 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" event={"ID":"0aaf2b35-164b-400a-ad78-84961c2a599c","Type":"ContainerStarted","Data":"2540a32665ad5b195c84e898634e6d04bda1ebc117912a640047bee17f389233"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.831882 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" event={"ID":"cc7542b4-b4d9-46e5-8819-784ec50c9c11","Type":"ContainerStarted","Data":"94573695d38d2ec2fa07938273ef1070f5016b6362eb16fcae772c641dbeddd1"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.833746 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" event={"ID":"27a202b7-1cf0-4dda-a010-6d59fbe881ed","Type":"ContainerStarted","Data":"73cb8b02978c7ce1fbf82275ea4d40be0037c348a32636114f475534137c7d21"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.834971 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" event={"ID":"eb3d3afa-eaa5-4271-8a33-45a009a9742a","Type":"ContainerStarted","Data":"d67a5b2a6427861e0696c4e484b6f8d187eeeb64f65b3c0f507ce4bcf9bc2101"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.836343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" event={"ID":"693e699a-cdc4-4282-9ba6-6947c3e42726","Type":"ContainerStarted","Data":"e9de185e74226619643db74e0c4bc3ead2c6aecefc7ff12ca19d80cea75a875b"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.837979 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" event={"ID":"2edf1cc1-0bd0-4329-969f-c2890b507972","Type":"ContainerStarted","Data":"102e141a0aa810617f76bdef35325baa74edfaa60a23cf5eec7f5b2094cb493a"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.839081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" event={"ID":"e8768c70-accf-460e-a781-b5d9eff26f2e","Type":"ContainerStarted","Data":"88b27c2d96be1911e84bd39e02cecde8e86f4029c1c04adb9803b153d3c349d2"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.840397 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" event={"ID":"5661f60c-1801-419e-abaa-7f5e0825f148","Type":"ContainerStarted","Data":"0899334d9694f6902a08ec72bd784ea8ef6793f5b5ace594cd8b061a159d69a4"} Nov 21 13:54:32 crc kubenswrapper[4675]: I1121 13:54:32.842913 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" event={"ID":"43da63e0-75a0-4e90-9e81-3b3be38a45b1","Type":"ContainerStarted","Data":"d735ed30a4f2523937758e93691248c5270ce094cc11e0151f825faaf02cf67f"} Nov 21 13:54:32 crc kubenswrapper[4675]: W1121 13:54:32.938586 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd5c12ec_ee26_458d_85f3_2b6bd7c021f1.slice/crio-ac18f136bd923947ee8c7f46ddee9123b4e8fe721fefc50db7d2fba628256a2b WatchSource:0}: Error finding container ac18f136bd923947ee8c7f46ddee9123b4e8fe721fefc50db7d2fba628256a2b: Status 404 returned error can't find the container with id ac18f136bd923947ee8c7f46ddee9123b4e8fe721fefc50db7d2fba628256a2b Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.417861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: 
\"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.445154 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77644f3e-1a90-4f49-a43c-b3d5b23c8184-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-cwz25\" (UID: \"77644f3e-1a90-4f49-a43c-b3d5b23c8184\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.616596 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.724586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.724674 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.730143 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.731228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/376edcff-4439-418a-80e3-6f6309cdb8f0-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-5zhcc\" (UID: \"376edcff-4439-418a-80e3-6f6309cdb8f0\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.837341 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.852022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" event={"ID":"fc4f9f2a-5093-4df0-919f-037e57993a93","Type":"ContainerStarted","Data":"c24df58fc0f27659222b54f12e5fa8cad6db3546425fa66fc2b395ff06ebde7c"} Nov 21 13:54:33 crc kubenswrapper[4675]: I1121 13:54:33.853385 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" event={"ID":"dd5c12ec-ee26-458d-85f3-2b6bd7c021f1","Type":"ContainerStarted","Data":"ac18f136bd923947ee8c7f46ddee9123b4e8fe721fefc50db7d2fba628256a2b"} Nov 21 13:54:37 crc kubenswrapper[4675]: W1121 13:54:37.674632 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfec54436_1bcf_4e1e_af27_d86372b07bbe.slice/crio-b28ffb96b45cebe640f76c98e62193069f1c8b5d6ba6c6e2517ada448c36c6e7 WatchSource:0}: Error finding container b28ffb96b45cebe640f76c98e62193069f1c8b5d6ba6c6e2517ada448c36c6e7: Status 404 returned error can't find the container with id b28ffb96b45cebe640f76c98e62193069f1c8b5d6ba6c6e2517ada448c36c6e7 Nov 21 13:54:37 crc kubenswrapper[4675]: W1121 13:54:37.681953 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5d16474_89e2_4e35_8339_24afbb962e4b.slice/crio-cf86ddbf1a528f1d2ef5d7a0ae8378a96ae4ed3a9461877748df1017076b755d WatchSource:0}: Error finding container cf86ddbf1a528f1d2ef5d7a0ae8378a96ae4ed3a9461877748df1017076b755d: Status 404 returned error can't find the container with id cf86ddbf1a528f1d2ef5d7a0ae8378a96ae4ed3a9461877748df1017076b755d Nov 21 13:54:37 crc kubenswrapper[4675]: I1121 13:54:37.889931 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" event={"ID":"fec54436-1bcf-4e1e-af27-d86372b07bbe","Type":"ContainerStarted","Data":"b28ffb96b45cebe640f76c98e62193069f1c8b5d6ba6c6e2517ada448c36c6e7"} Nov 21 13:54:37 crc kubenswrapper[4675]: I1121 13:54:37.891028 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" event={"ID":"e5d16474-89e2-4e35-8339-24afbb962e4b","Type":"ContainerStarted","Data":"cf86ddbf1a528f1d2ef5d7a0ae8378a96ae4ed3a9461877748df1017076b755d"} Nov 21 13:54:46 crc kubenswrapper[4675]: I1121 13:54:46.136571 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:54:46 crc kubenswrapper[4675]: I1121 13:54:46.137121 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.122804 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.123942 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wxqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-lf6g7_openstack-operators(cc7542b4-b4d9-46e5-8819-784ec50c9c11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.169922 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868: Get \"http://38.102.83.9:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868\": context canceled" image="38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.170592 4675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868: Get 
\"http://38.102.83.9:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868\": context canceled" image="38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.170910 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjhq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7fc59d4bfd-8swxd_openstack-operators(cfb97bdd-e357-475c-ab5c-184e50acb0dc): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868: Get \"http://38.102.83.9:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868\": context canceled" logger="UnhandledError" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.267829 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.267991 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhq7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-65k5k_openstack-operators(fec54436-1bcf-4e1e-af27-d86372b07bbe): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" logger="UnhandledError" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.269126 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \\\"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\\\": context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" podUID="fec54436-1bcf-4e1e-af27-d86372b07bbe" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.564470 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 21 13:54:53 crc 
kubenswrapper[4675]: E1121 13:54:53.564684 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkgrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-bhcrz_openstack-operators(e8768c70-accf-460e-a781-b5d9eff26f2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.937170 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377" Nov 21 13:54:53 crc kubenswrapper[4675]: E1121 13:54:53.937368 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hgzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bfcdc958c-kxfcm_openstack-operators(2edf1cc1-0bd0-4329-969f-c2890b507972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:54 crc kubenswrapper[4675]: E1121 13:54:54.054167 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" podUID="fec54436-1bcf-4e1e-af27-d86372b07bbe" Nov 21 13:54:54 crc kubenswrapper[4675]: E1121 13:54:54.327423 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 21 13:54:54 crc kubenswrapper[4675]: E1121 13:54:54.327730 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d7nq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-dcswk_openstack-operators(5661f60c-1801-419e-abaa-7f5e0825f148): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:54 crc kubenswrapper[4675]: E1121 13:54:54.789515 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 21 13:54:54 crc kubenswrapper[4675]: E1121 13:54:54.789657 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l262s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-nmtt7_openstack-operators(693e699a-cdc4-4282-9ba6-6947c3e42726): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:55 crc kubenswrapper[4675]: E1121 13:54:55.264745 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 21 13:54:55 crc kubenswrapper[4675]: E1121 13:54:55.265547 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stjvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-p2fwv_openstack-operators(0aaf2b35-164b-400a-ad78-84961c2a599c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:59 crc kubenswrapper[4675]: E1121 13:54:59.274783 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 21 13:54:59 crc kubenswrapper[4675]: E1121 13:54:59.275723 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zglcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-p4r2x_openstack-operators(eb3d3afa-eaa5-4271-8a33-45a009a9742a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:59 crc kubenswrapper[4675]: E1121 13:54:59.707437 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0" Nov 21 13:54:59 crc kubenswrapper[4675]: E1121 13:54:59.707690 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxvtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-rpmxk_openstack-operators(e5d16474-89e2-4e35-8339-24afbb962e4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:55:00 crc kubenswrapper[4675]: E1121 13:55:00.141834 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 21 13:55:00 crc kubenswrapper[4675]: E1121 13:55:00.142250 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pzn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-2l49b_openstack-operators(dd5c12ec-ee26-458d-85f3-2b6bd7c021f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:55:01 crc kubenswrapper[4675]: I1121 13:55:01.549002 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc"] Nov 21 13:55:01 crc kubenswrapper[4675]: W1121 13:55:01.673241 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376edcff_4439_418a_80e3_6f6309cdb8f0.slice/crio-e69a2cae689bc6c2e861d31c78ad417daf4e9941c7b0470a17587a6cb24b50fb WatchSource:0}: Error finding container e69a2cae689bc6c2e861d31c78ad417daf4e9941c7b0470a17587a6cb24b50fb: Status 404 returned error can't find the container with id e69a2cae689bc6c2e861d31c78ad417daf4e9941c7b0470a17587a6cb24b50fb Nov 21 13:55:01 crc kubenswrapper[4675]: E1121 13:55:01.685299 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 21 13:55:01 crc kubenswrapper[4675]: E1121 13:55:01.685468 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfpln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-55qzb_openstack-operators(fc4f9f2a-5093-4df0-919f-037e57993a93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:55:02 crc kubenswrapper[4675]: I1121 13:55:02.104561 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25"] Nov 21 13:55:02 crc kubenswrapper[4675]: I1121 13:55:02.118538 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" event={"ID":"f3631bac-6fa8-4ad8-bbad-df880af19292","Type":"ContainerStarted","Data":"37bfd3bd39c773ce4e713765aca22e33a68226eec4aa9b23fb9f89be30995a59"} Nov 21 13:55:02 crc kubenswrapper[4675]: I1121 13:55:02.120044 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" event={"ID":"26fa3df8-f4d3-44d1-8e9b-c20dca446570","Type":"ContainerStarted","Data":"54a5d301cac1f4b0640a0816758ee5caae04b3250ce9ace95412b57be8d33414"} Nov 21 13:55:02 crc kubenswrapper[4675]: I1121 13:55:02.120896 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" event={"ID":"376edcff-4439-418a-80e3-6f6309cdb8f0","Type":"ContainerStarted","Data":"e69a2cae689bc6c2e861d31c78ad417daf4e9941c7b0470a17587a6cb24b50fb"} Nov 21 13:55:05 crc kubenswrapper[4675]: I1121 13:55:05.149320 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" event={"ID":"2808d52f-0a70-48df-9b55-052faa81f93c","Type":"ContainerStarted","Data":"942266a7afca36b9da95f9fbff2d3100846a509118da4af320924c2591d9e982"} Nov 21 13:55:05 crc kubenswrapper[4675]: I1121 13:55:05.152264 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" event={"ID":"7e3588ab-94d7-482f-97c4-67d573181e2c","Type":"ContainerStarted","Data":"a627da9d8b49b782a48e9e7a182aeafa3bc22fb9a67d63ed2a33a11ee7d54b8d"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.162303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" 
event={"ID":"376edcff-4439-418a-80e3-6f6309cdb8f0","Type":"ContainerStarted","Data":"27067a95215d71a14b8d1c9672464ccdb944f4f782d04bdeac20fafc14a14ad5"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.164013 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.171356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" event={"ID":"45c2d5a9-a319-4012-91de-77769b6ad913","Type":"ContainerStarted","Data":"c1c8814e459db3e5e3a44e7b6ce2ac8a74ba8c719379c729c4bf5044b5d239b9"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.174411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" event={"ID":"77644f3e-1a90-4f49-a43c-b3d5b23c8184","Type":"ContainerStarted","Data":"a94f37e25d2cf3058cb37ec90773fb127cae7b9742d532e4cff4cb82cce107b9"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.176382 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" event={"ID":"27a202b7-1cf0-4dda-a010-6d59fbe881ed","Type":"ContainerStarted","Data":"5c9025f912fb53ea5230b0913cdc03fb1f2fcf0cf4b8a2247e3484fd2c2b12d9"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.177986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" event={"ID":"43da63e0-75a0-4e90-9e81-3b3be38a45b1","Type":"ContainerStarted","Data":"fca5fea5ee21cc135db77657acbd37f134c180257c9216e74b0134759ef6b9e5"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.208501 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" event={"ID":"a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca","Type":"ContainerStarted","Data":"50648b309686aa62ff080fd2b3aa857c643f7b891b4e03b43508298216bc2620"} Nov 21 13:55:06 crc kubenswrapper[4675]: I1121 13:55:06.215192 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" event={"ID":"89aec3aa-b2d8-4702-b0fd-005c6d51c669","Type":"ContainerStarted","Data":"443f0a1ca17bddc3bd4468d3006989aa8f6fefde22b1c707f60ada79cad7a3f7"} Nov 21 13:55:11 crc kubenswrapper[4675]: E1121 13:55:11.033449 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" podUID="dd5c12ec-ee26-458d-85f3-2b6bd7c021f1" Nov 21 13:55:11 crc kubenswrapper[4675]: E1121 13:55:11.228798 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" podUID="2edf1cc1-0bd0-4329-969f-c2890b507972" Nov 21 13:55:11 crc kubenswrapper[4675]: E1121 13:55:11.234503 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" podUID="cc7542b4-b4d9-46e5-8819-784ec50c9c11" Nov 21 13:55:11 crc kubenswrapper[4675]: I1121 13:55:11.261989 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" event={"ID":"2edf1cc1-0bd0-4329-969f-c2890b507972","Type":"ContainerStarted","Data":"4ff59d505ce41924880ef10ee559bbf34cd2a8abd2e22c2b8b655c4cb76df766"} Nov 21 13:55:11 crc kubenswrapper[4675]: I1121 13:55:11.264663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" event={"ID":"cc7542b4-b4d9-46e5-8819-784ec50c9c11","Type":"ContainerStarted","Data":"9b2801455f2e62931dd034b878685a13182d638767e52ba4a12aef4480bb7b59"} Nov 21 13:55:11 crc kubenswrapper[4675]: I1121 13:55:11.267057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" event={"ID":"dd5c12ec-ee26-458d-85f3-2b6bd7c021f1","Type":"ContainerStarted","Data":"2ed5341b639b184622e77cc10aad13934bb88592e3313b5abef0cec6b177c42a"} Nov 21 13:55:11 crc kubenswrapper[4675]: I1121 13:55:11.295366 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" podStartSLOduration=42.29534057 podStartE2EDuration="42.29534057s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:06.206595842 +0000 UTC m=+1382.933010599" watchObservedRunningTime="2025-11-21 13:55:11.29534057 +0000 UTC m=+1388.021755307" Nov 21 13:55:11 crc kubenswrapper[4675]: E1121 13:55:11.549473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" podUID="fc4f9f2a-5093-4df0-919f-037e57993a93" Nov 21 13:55:11 crc kubenswrapper[4675]: E1121 13:55:11.816656 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" podUID="693e699a-cdc4-4282-9ba6-6947c3e42726" Nov 21 13:55:11 crc kubenswrapper[4675]: E1121 13:55:11.954228 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" podUID="5661f60c-1801-419e-abaa-7f5e0825f148" Nov 21 13:55:12 crc kubenswrapper[4675]: E1121 13:55:12.078350 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" podUID="e8768c70-accf-460e-a781-b5d9eff26f2e" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.276388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" 
event={"ID":"fec54436-1bcf-4e1e-af27-d86372b07bbe","Type":"ContainerStarted","Data":"b8cfe7d2e0456e8421978c0c024d93a2985741c4a000f5afed282682ebcf088d"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.280309 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" event={"ID":"e8768c70-accf-460e-a781-b5d9eff26f2e","Type":"ContainerStarted","Data":"d003b815d737978d8334b084ba654bf535cd035f0e8714553451f6e74d0bd902"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.283660 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" event={"ID":"5661f60c-1801-419e-abaa-7f5e0825f148","Type":"ContainerStarted","Data":"485f78af53201734a0b44f8f60bed0b8641f2cd6b021d624f50b9420fecb3bab"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.293308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" event={"ID":"43da63e0-75a0-4e90-9e81-3b3be38a45b1","Type":"ContainerStarted","Data":"1bceb039a2a99c5ad97deccd3c54ae35c4a8e4df858d14e326d3e799401815ab"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.294010 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.300735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" event={"ID":"89aec3aa-b2d8-4702-b0fd-005c6d51c669","Type":"ContainerStarted","Data":"ec6b0a62d3051ce04206ba93a582d478f3ace2ea71863e2818b317d22fe745e8"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.302114 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.302152 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.303706 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-65k5k" podStartSLOduration=12.538271863 podStartE2EDuration="43.303677841s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:40.319271103 +0000 UTC m=+1357.045685830" lastFinishedPulling="2025-11-21 13:55:11.084677081 +0000 UTC m=+1387.811091808" observedRunningTime="2025-11-21 13:55:12.299140898 +0000 UTC m=+1389.025555625" watchObservedRunningTime="2025-11-21 13:55:12.303677841 +0000 UTC m=+1389.030092568" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.304984 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.311089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" event={"ID":"fc4f9f2a-5093-4df0-919f-037e57993a93","Type":"ContainerStarted","Data":"c256d9c26551b25bb2f6f3cf5e7d677fad80f65262e3bd8c6777c63624d99131"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.328937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" event={"ID":"2808d52f-0a70-48df-9b55-052faa81f93c","Type":"ContainerStarted","Data":"8b57a342c6322eb161b502aec66a525c5edd36580d040b3d117bf924790742a0"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.330014 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.338294 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.342372 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" event={"ID":"693e699a-cdc4-4282-9ba6-6947c3e42726","Type":"ContainerStarted","Data":"bd1bffc9c61e1c04de7a5d0f679c874c144bce53e1a8db31463379a31376b79e"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.353677 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" event={"ID":"45c2d5a9-a319-4012-91de-77769b6ad913","Type":"ContainerStarted","Data":"6032525e77bed939c4e2f10456f13a18635f6201794f27655f1e03d7ca6b9b54"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.354712 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.359391 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.360670 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-qb7zq" podStartSLOduration=5.104410206 podStartE2EDuration="44.36064858s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.828619542 +0000 UTC m=+1348.555034259" lastFinishedPulling="2025-11-21 13:55:11.084857906 +0000 UTC m=+1387.811272633" observedRunningTime="2025-11-21 13:55:12.352550919 +0000 UTC m=+1389.078965646" watchObservedRunningTime="2025-11-21 13:55:12.36064858 +0000 UTC m=+1389.087063307" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.363601 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" event={"ID":"77644f3e-1a90-4f49-a43c-b3d5b23c8184","Type":"ContainerStarted","Data":"87fae2e572881f34f03d98df93647416c1a9a4600f8c9c297a30221744cf259c"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.368469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" event={"ID":"27a202b7-1cf0-4dda-a010-6d59fbe881ed","Type":"ContainerStarted","Data":"42e604681b594068d9fe0e54344a111533261cba9f9507ccd9b1ac23db6f073c"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.370242 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.376844 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.381748 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" event={"ID":"7e3588ab-94d7-482f-97c4-67d573181e2c","Type":"ContainerStarted","Data":"72febd4b4bb624acc987903ae1004f7d68ee6f6895560cc912d641f4849224d4"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.383891 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.393662 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" event={"ID":"26fa3df8-f4d3-44d1-8e9b-c20dca446570","Type":"ContainerStarted","Data":"0eb029d6bef0e63cf5bcb11fb2a5cf9bc88bf20cbc3acee060ede758276cd2cf"} Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.395117 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.404509 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.404720 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" Nov 21 13:55:12 crc kubenswrapper[4675]: E1121 13:55:12.407259 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" podUID="eb3d3afa-eaa5-4271-8a33-45a009a9742a" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.426085 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-lmpfg" podStartSLOduration=3.419899583 podStartE2EDuration="43.42605324s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.058946537 +0000 UTC m=+1347.785361264" lastFinishedPulling="2025-11-21 13:55:11.065100194 +0000 UTC m=+1387.791514921" observedRunningTime="2025-11-21 13:55:12.423570008 +0000 UTC m=+1389.149984735" watchObservedRunningTime="2025-11-21 13:55:12.42605324 +0000 UTC m=+1389.152467967" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.486138 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-46782" podStartSLOduration=4.327055983 podStartE2EDuration="43.486118626s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.973671616 +0000 UTC m=+1348.700086343" lastFinishedPulling="2025-11-21 13:55:11.132734259 +0000 UTC m=+1387.859148986" observedRunningTime="2025-11-21 13:55:12.462293703 +0000 UTC m=+1389.188708440" watchObservedRunningTime="2025-11-21 13:55:12.486118626 +0000 UTC m=+1389.212533353" Nov 21 13:55:12 crc kubenswrapper[4675]: E1121 13:55:12.495530 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" podUID="e5d16474-89e2-4e35-8339-24afbb962e4b" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.569471 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-dd9xn" podStartSLOduration=4.134366999 podStartE2EDuration="44.569454253s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:30.682638561 +0000 UTC m=+1347.409053298" lastFinishedPulling="2025-11-21 13:55:11.117725825 +0000 UTC m=+1387.844140552" observedRunningTime="2025-11-21 13:55:12.565405882 +0000 UTC m=+1389.291820619" watchObservedRunningTime="2025-11-21 13:55:12.569454253 +0000 UTC m=+1389.295868980" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.572935 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-4gf9h" podStartSLOduration=4.508020418 podStartE2EDuration="44.572917879s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.053716076 +0000 UTC m=+1347.780130803" lastFinishedPulling="2025-11-21 13:55:11.118613537 +0000 UTC m=+1387.845028264" observedRunningTime="2025-11-21 13:55:12.536326527 +0000 UTC m=+1389.262741254" watchObservedRunningTime="2025-11-21 13:55:12.572917879 +0000 UTC m=+1389.299332606" Nov 21 13:55:12 crc kubenswrapper[4675]: E1121 13:55:12.601470 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" podUID="0aaf2b35-164b-400a-ad78-84961c2a599c" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.621285 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-h9twj" podStartSLOduration=3.819911125 podStartE2EDuration="44.621269723s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:30.320445168 +0000 UTC m=+1347.046859895" lastFinishedPulling="2025-11-21 13:55:11.121803756 +0000 UTC m=+1387.848218493" observedRunningTime="2025-11-21 13:55:12.619860078 +0000 UTC m=+1389.346274805" watchObservedRunningTime="2025-11-21 13:55:12.621269723 +0000 UTC m=+1389.347684440" Nov 21 13:55:12 crc kubenswrapper[4675]: E1121 13:55:12.670840 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868: Get \\\"http://38.102.83.9:5001/v2/openstack-k8s-operators/telemetry-operator/blobs/sha256:9874a68632ac0f050a94f2a5598ce6867128dc558db9504cf8fa6720039be868\\\": context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" podUID="cfb97bdd-e357-475c-ab5c-184e50acb0dc" Nov 21 13:55:12 crc kubenswrapper[4675]: I1121 13:55:12.685085 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-726dt" podStartSLOduration=4.308117388 podStartE2EDuration="44.685048892s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:30.682336034 +0000 UTC m=+1347.408750761" 
lastFinishedPulling="2025-11-21 13:55:11.059267538 +0000 UTC m=+1387.785682265" observedRunningTime="2025-11-21 13:55:12.673735201 +0000 UTC m=+1389.400149928" watchObservedRunningTime="2025-11-21 13:55:12.685048892 +0000 UTC m=+1389.411463619" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.401906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" event={"ID":"cfb97bdd-e357-475c-ab5c-184e50acb0dc","Type":"ContainerStarted","Data":"639d32fc9dca71fc2a21d9738b6b262a8fe064225c9a778cfc5aac35a48e708a"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.404615 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" event={"ID":"2edf1cc1-0bd0-4329-969f-c2890b507972","Type":"ContainerStarted","Data":"d9ca277428cf407ced26359a39841ededd2b1849a5875d95ed320dadb1c4e16c"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.404863 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.406241 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" event={"ID":"0aaf2b35-164b-400a-ad78-84961c2a599c","Type":"ContainerStarted","Data":"c51dcf21fcd5d3ead43fd85293fe14836c69fe0fee272eaa0e3200e2bcf4efff"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.408106 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" event={"ID":"77644f3e-1a90-4f49-a43c-b3d5b23c8184","Type":"ContainerStarted","Data":"a2cd3fc8c32831664c2a032031dfebad47f9c1e0067322b2bd363d378d998cd9"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.408248 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.409841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" event={"ID":"cc7542b4-b4d9-46e5-8819-784ec50c9c11","Type":"ContainerStarted","Data":"e9d9f4f2acfad99f58b3a518848f077823f416ffe5976787ef30002e9ab4def0"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.409983 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.411249 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" event={"ID":"e5d16474-89e2-4e35-8339-24afbb962e4b","Type":"ContainerStarted","Data":"367802964a9720b7c7f3a120c789792d1f4e8f2c62414b75850f3476204c0d57"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.412719 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" event={"ID":"dd5c12ec-ee26-458d-85f3-2b6bd7c021f1","Type":"ContainerStarted","Data":"eef4a0cdc98b8c75d7ada5beaba004c703b1c1772dc8792b32f9111e9b59a611"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.414867 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" 
event={"ID":"f3631bac-6fa8-4ad8-bbad-df880af19292","Type":"ContainerStarted","Data":"07e20ba7a66c2db443888b9c9946e5725630d49400e836eb7e311c5c0fadffcd"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.415211 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.416508 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" event={"ID":"eb3d3afa-eaa5-4271-8a33-45a009a9742a","Type":"ContainerStarted","Data":"67868fd66e7abb04832f9a8eec835654d64a109fb614898edb88affde828cb04"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.418773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" event={"ID":"a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca","Type":"ContainerStarted","Data":"c59e5eead1598b3e1a9045d23e04dbc397a41118d31c7eb2424e54a2990dc239"} Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.419779 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.419813 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.421628 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.462841 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" podStartSLOduration=4.325422431 podStartE2EDuration="44.462818758s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.855788759 +0000 UTC m=+1348.582203496" lastFinishedPulling="2025-11-21 13:55:11.993185096 +0000 UTC m=+1388.719599823" observedRunningTime="2025-11-21 13:55:13.45927175 +0000 UTC m=+1390.185686477" watchObservedRunningTime="2025-11-21 13:55:13.462818758 +0000 UTC m=+1390.189233485" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.491111 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-k4lls" podStartSLOduration=4.328939675 podStartE2EDuration="45.491094213s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:29.996828165 +0000 UTC m=+1346.723242892" lastFinishedPulling="2025-11-21 13:55:11.158982703 +0000 UTC m=+1387.885397430" observedRunningTime="2025-11-21 13:55:13.476855738 +0000 UTC m=+1390.203270465" watchObservedRunningTime="2025-11-21 13:55:13.491094213 +0000 UTC m=+1390.217508940" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.551994 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" podStartSLOduration=39.223767346 podStartE2EDuration="44.5519808s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:55:05.730709966 +0000 UTC m=+1382.457124693" lastFinishedPulling="2025-11-21 13:55:11.05892341 +0000 UTC m=+1387.785338147" 
observedRunningTime="2025-11-21 13:55:13.548699548 +0000 UTC m=+1390.275114275" watchObservedRunningTime="2025-11-21 13:55:13.5519808 +0000 UTC m=+1390.278395527" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.554949 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-hld5f" podStartSLOduration=5.507846016 podStartE2EDuration="45.554931123s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.086498283 +0000 UTC m=+1347.812913010" lastFinishedPulling="2025-11-21 13:55:11.13358339 +0000 UTC m=+1387.859998117" observedRunningTime="2025-11-21 13:55:13.516770443 +0000 UTC m=+1390.243185170" watchObservedRunningTime="2025-11-21 13:55:13.554931123 +0000 UTC m=+1390.281345850" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.613955 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" podStartSLOduration=5.39638208 podStartE2EDuration="45.613940594s" podCreationTimestamp="2025-11-21 13:54:28 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.759787697 +0000 UTC m=+1348.486202434" lastFinishedPulling="2025-11-21 13:55:11.977346221 +0000 UTC m=+1388.703760948" observedRunningTime="2025-11-21 13:55:13.609549574 +0000 UTC m=+1390.335964301" watchObservedRunningTime="2025-11-21 13:55:13.613940594 +0000 UTC m=+1390.340355321" Nov 21 13:55:13 crc kubenswrapper[4675]: I1121 13:55:13.843278 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-5zhcc" Nov 21 13:55:15 crc kubenswrapper[4675]: I1121 13:55:15.462300 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" podStartSLOduration=7.406729568 podStartE2EDuration="46.462277252s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:32.941794025 +0000 UTC m=+1349.668208752" lastFinishedPulling="2025-11-21 13:55:11.997341709 +0000 UTC m=+1388.723756436" observedRunningTime="2025-11-21 13:55:15.46177632 +0000 UTC m=+1392.188191047" watchObservedRunningTime="2025-11-21 13:55:15.462277252 +0000 UTC m=+1392.188691999" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.136047 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.136350 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.476099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" event={"ID":"0aaf2b35-164b-400a-ad78-84961c2a599c","Type":"ContainerStarted","Data":"048e4d6651c1f829e00e7c456afbf1a3f7e709fd03c634747cc0c103969fda23"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.477221 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.485641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" event={"ID":"e8768c70-accf-460e-a781-b5d9eff26f2e","Type":"ContainerStarted","Data":"56b62a98f01496f5989f1ec0ea9ce584a863cc0473f20cae359049f50b0c9ca3"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.486462 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.497420 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" event={"ID":"5661f60c-1801-419e-abaa-7f5e0825f148","Type":"ContainerStarted","Data":"f2de9ab5f2a6c4fe0d01f4d5f58ff4445656bcca258a9da132e688b6b698afbd"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.498353 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.501803 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" podStartSLOduration=3.566962976 podStartE2EDuration="47.501788439s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.814732526 +0000 UTC m=+1348.541147253" lastFinishedPulling="2025-11-21 13:55:15.749557989 +0000 UTC m=+1392.475972716" observedRunningTime="2025-11-21 13:55:16.498701032 +0000 UTC m=+1393.225115759" watchObservedRunningTime="2025-11-21 13:55:16.501788439 +0000 UTC m=+1393.228203166" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.519255 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" event={"ID":"eb3d3afa-eaa5-4271-8a33-45a009a9742a","Type":"ContainerStarted","Data":"b3149389626b334e17bbb864133d00128ce475866254cb772b5b9f37a7d01274"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.519936 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.522057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" event={"ID":"e5d16474-89e2-4e35-8339-24afbb962e4b","Type":"ContainerStarted","Data":"8b91670ec26099cb4c0c63c8b378d4f6be66952dbf07159f6729c7933b41fc88"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.523039 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.523687 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" podStartSLOduration=4.018865004 podStartE2EDuration="47.523667834s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.831026742 +0000 UTC m=+1348.557441469" lastFinishedPulling="2025-11-21 13:55:15.335829582 +0000 UTC m=+1392.062244299" observedRunningTime="2025-11-21 13:55:16.520928796 +0000 UTC 
m=+1393.247343533" watchObservedRunningTime="2025-11-21 13:55:16.523667834 +0000 UTC m=+1393.250082561" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.527055 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" event={"ID":"fc4f9f2a-5093-4df0-919f-037e57993a93","Type":"ContainerStarted","Data":"f0d33fa487daaf95a84696373d0b03b378eecf158500d298533387b29cd76783"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.527365 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.529411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" event={"ID":"693e699a-cdc4-4282-9ba6-6947c3e42726","Type":"ContainerStarted","Data":"5917d3c58edc6605118a0ec03951dc49d05a16a073ab589cb44a2830a2010d3f"} Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.529724 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.537584 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" podStartSLOduration=4.215376641 podStartE2EDuration="47.537572401s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:32.01276927 +0000 UTC m=+1348.739183997" lastFinishedPulling="2025-11-21 13:55:15.33496502 +0000 UTC m=+1392.061379757" observedRunningTime="2025-11-21 13:55:16.534567226 +0000 UTC m=+1393.260981953" watchObservedRunningTime="2025-11-21 13:55:16.537572401 +0000 UTC m=+1393.263987118" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.551038 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" podStartSLOduration=3.80674036 podStartE2EDuration="47.551019656s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.746011154 +0000 UTC m=+1348.472425871" lastFinishedPulling="2025-11-21 13:55:15.49029044 +0000 UTC m=+1392.216705167" observedRunningTime="2025-11-21 13:55:16.548265017 +0000 UTC m=+1393.274679744" watchObservedRunningTime="2025-11-21 13:55:16.551019656 +0000 UTC m=+1393.277434383" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.574466 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" podStartSLOduration=12.143838354 podStartE2EDuration="47.574446149s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:40.318894243 +0000 UTC m=+1357.045308970" lastFinishedPulling="2025-11-21 13:55:15.749502038 +0000 UTC m=+1392.475916765" observedRunningTime="2025-11-21 13:55:16.565115637 +0000 UTC m=+1393.291530364" watchObservedRunningTime="2025-11-21 13:55:16.574446149 +0000 UTC m=+1393.300860876" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.583886 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" podStartSLOduration=5.189813096 podStartE2EDuration="47.583866804s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" 
firstStartedPulling="2025-11-21 13:54:32.941722803 +0000 UTC m=+1349.668137530" lastFinishedPulling="2025-11-21 13:55:15.335776511 +0000 UTC m=+1392.062191238" observedRunningTime="2025-11-21 13:55:16.578398878 +0000 UTC m=+1393.304813605" watchObservedRunningTime="2025-11-21 13:55:16.583866804 +0000 UTC m=+1393.310281531" Nov 21 13:55:16 crc kubenswrapper[4675]: I1121 13:55:16.597847 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" podStartSLOduration=3.700290127 podStartE2EDuration="47.597830292s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:31.853155443 +0000 UTC m=+1348.579570170" lastFinishedPulling="2025-11-21 13:55:15.750695608 +0000 UTC m=+1392.477110335" observedRunningTime="2025-11-21 13:55:16.595234257 +0000 UTC m=+1393.321648984" watchObservedRunningTime="2025-11-21 13:55:16.597830292 +0000 UTC m=+1393.324245019" Nov 21 13:55:18 crc kubenswrapper[4675]: I1121 13:55:18.549523 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" event={"ID":"cfb97bdd-e357-475c-ab5c-184e50acb0dc","Type":"ContainerStarted","Data":"809a918e4beb48aa4eda07bf3dcee7a9d302f7f5199e87f42739fa7ee3ba1bf1"} Nov 21 13:55:18 crc kubenswrapper[4675]: I1121 13:55:18.550252 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:55:18 crc kubenswrapper[4675]: I1121 13:55:18.567656 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" podStartSLOduration=3.753361291 podStartE2EDuration="49.567633987s" podCreationTimestamp="2025-11-21 13:54:29 +0000 UTC" firstStartedPulling="2025-11-21 13:54:32.008708019 +0000 UTC m=+1348.735122746" lastFinishedPulling="2025-11-21 13:55:17.822980715 +0000 UTC m=+1394.549395442" observedRunningTime="2025-11-21 13:55:18.566649402 +0000 UTC m=+1395.293064129" watchObservedRunningTime="2025-11-21 13:55:18.567633987 +0000 UTC m=+1395.294048734" Nov 21 13:55:19 crc kubenswrapper[4675]: I1121 13:55:19.367128 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kxfcm" Nov 21 13:55:19 crc kubenswrapper[4675]: I1121 13:55:19.634266 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-lf6g7" Nov 21 13:55:20 crc kubenswrapper[4675]: I1121 13:55:20.050226 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dcswk" Nov 21 13:55:20 crc kubenswrapper[4675]: I1121 13:55:20.122467 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:55:20 crc kubenswrapper[4675]: I1121 13:55:20.128813 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-2l49b" Nov 21 13:55:20 crc kubenswrapper[4675]: I1121 13:55:20.171331 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-55qzb" Nov 21 13:55:23 crc kubenswrapper[4675]: I1121 
13:55:23.625878 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-cwz25" Nov 21 13:55:29 crc kubenswrapper[4675]: I1121 13:55:29.658632 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-p4r2x" Nov 21 13:55:29 crc kubenswrapper[4675]: I1121 13:55:29.837347 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-nmtt7" Nov 21 13:55:29 crc kubenswrapper[4675]: I1121 13:55:29.867611 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bhcrz" Nov 21 13:55:29 crc kubenswrapper[4675]: I1121 13:55:29.981471 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-p2fwv" Nov 21 13:55:30 crc kubenswrapper[4675]: I1121 13:55:30.101783 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-8swxd" Nov 21 13:55:30 crc kubenswrapper[4675]: I1121 13:55:30.380854 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rpmxk" Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.136306 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.136903 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.136996 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.137689 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f2f7ddee4baba66416eb7233c361ee3ddc2444a945155131226bb7f36fc9024"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.137740 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://4f2f7ddee4baba66416eb7233c361ee3ddc2444a945155131226bb7f36fc9024" gracePeriod=600 Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.616901 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pcqtd"] Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.619196 4675 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.623102 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.626946 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.627160 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.627296 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-g5h6s"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.637616 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pcqtd"]
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.662800 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwph\" (UniqueName: \"kubernetes.io/projected/7ed87882-a8fb-4fca-95f2-1b087b762b4e-kube-api-access-zvwph\") pod \"dnsmasq-dns-675f4bcbfc-pcqtd\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.662885 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed87882-a8fb-4fca-95f2-1b087b762b4e-config\") pod \"dnsmasq-dns-675f4bcbfc-pcqtd\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.690102 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q799p"]
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.691962 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.695820 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.703682 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q799p"]
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.764996 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-config\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.765093 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwph\" (UniqueName: \"kubernetes.io/projected/7ed87882-a8fb-4fca-95f2-1b087b762b4e-kube-api-access-zvwph\") pod \"dnsmasq-dns-675f4bcbfc-pcqtd\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.765130 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed87882-a8fb-4fca-95f2-1b087b762b4e-config\") pod \"dnsmasq-dns-675f4bcbfc-pcqtd\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.765170 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9nk\" (UniqueName: \"kubernetes.io/projected/568ef7bd-4d05-4b69-b17c-e1610ddff47b-kube-api-access-zj9nk\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.765287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.766526 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed87882-a8fb-4fca-95f2-1b087b762b4e-config\") pod \"dnsmasq-dns-675f4bcbfc-pcqtd\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.806436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwph\" (UniqueName: \"kubernetes.io/projected/7ed87882-a8fb-4fca-95f2-1b087b762b4e-kube-api-access-zvwph\") pod \"dnsmasq-dns-675f4bcbfc-pcqtd\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.812707 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="4f2f7ddee4baba66416eb7233c361ee3ddc2444a945155131226bb7f36fc9024" exitCode=0
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.812753 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"4f2f7ddee4baba66416eb7233c361ee3ddc2444a945155131226bb7f36fc9024"}
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.812788 4675 scope.go:117] "RemoveContainer" containerID="fd8ece146e7469ff47abc44df983434b24140bc8b8a19319d303006a9e5badd2"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.867249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.867336 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-config\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.867399 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9nk\" (UniqueName: \"kubernetes.io/projected/568ef7bd-4d05-4b69-b17c-e1610ddff47b-kube-api-access-zj9nk\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.869328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.869902 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-config\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.890802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9nk\" (UniqueName: \"kubernetes.io/projected/568ef7bd-4d05-4b69-b17c-e1610ddff47b-kube-api-access-zj9nk\") pod \"dnsmasq-dns-78dd6ddcc-q799p\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:46 crc kubenswrapper[4675]: I1121 13:55:46.950504 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd"
Nov 21 13:55:47 crc kubenswrapper[4675]: I1121 13:55:47.010785 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p"
Nov 21 13:55:47 crc kubenswrapper[4675]: I1121 13:55:47.410490 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q799p"]
Nov 21 13:55:47 crc kubenswrapper[4675]: I1121 13:55:47.714389 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pcqtd"]
Nov 21 13:55:47 crc kubenswrapper[4675]: W1121 13:55:47.717830 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ed87882_a8fb_4fca_95f2_1b087b762b4e.slice/crio-522faa0d4325dfa595e4cc3cd7bd17f11e1c617fcf64e2e49568170fcef45f55 WatchSource:0}: Error finding container 522faa0d4325dfa595e4cc3cd7bd17f11e1c617fcf64e2e49568170fcef45f55: Status 404 returned error can't find the container with id 522faa0d4325dfa595e4cc3cd7bd17f11e1c617fcf64e2e49568170fcef45f55
Nov 21 13:55:47 crc kubenswrapper[4675]: I1121 13:55:47.825135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4"}
Nov 21 13:55:47 crc kubenswrapper[4675]: I1121 13:55:47.827175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd" event={"ID":"7ed87882-a8fb-4fca-95f2-1b087b762b4e","Type":"ContainerStarted","Data":"522faa0d4325dfa595e4cc3cd7bd17f11e1c617fcf64e2e49568170fcef45f55"}
Nov 21 13:55:47 crc kubenswrapper[4675]: I1121 13:55:47.828612 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p" event={"ID":"568ef7bd-4d05-4b69-b17c-e1610ddff47b","Type":"ContainerStarted","Data":"b48c250d5924ffa197bc61a0febd13110a8fd2500b4ddab51661bfafb4b08b31"}
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.597075 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pcqtd"]
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.611206 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-8q49k"]
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.613586 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.623103 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-8q49k"]
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.629305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.629447 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-config\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.629511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj66l\" (UniqueName: \"kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.730991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-config\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.731093 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj66l\" (UniqueName: \"kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.731183 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.734345 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.740663 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-config\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.772646 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj66l\" (UniqueName: \"kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k"
\"kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l\") pod \"dnsmasq-dns-5ccc8479f9-8q49k\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.938097 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q799p"] Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.953431 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.977222 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q4w85"] Nov 21 13:55:49 crc kubenswrapper[4675]: I1121 13:55:49.979025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.043961 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q4w85"] Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.144736 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-config\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.144858 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.144935 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6hc\" (UniqueName: \"kubernetes.io/projected/88ddf172-a0a8-40ea-a447-36295e8cfb52-kube-api-access-lq6hc\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.247343 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6hc\" (UniqueName: \"kubernetes.io/projected/88ddf172-a0a8-40ea-a447-36295e8cfb52-kube-api-access-lq6hc\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.247453 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-config\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.247537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.248749 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.252007 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-config\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.268602 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6hc\" (UniqueName: \"kubernetes.io/projected/88ddf172-a0a8-40ea-a447-36295e8cfb52-kube-api-access-lq6hc\") pod \"dnsmasq-dns-57d769cc4f-q4w85\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.357989 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.739038 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.744321 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.747080 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.747266 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.747411 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dd5hp" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.748043 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.748308 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.749322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.749664 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.770757 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:55:50 crc kubenswrapper[4675]: W1121 13:55:50.779821 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56cb8af6_449e_48fd_aa91_bb358634ff4a.slice/crio-58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc WatchSource:0}: Error finding container 58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc: Status 404 returned error can't find the container with id 58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.780749 4675 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-8q49k"] Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.866215 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.866583 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlzx\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-kube-api-access-9mlzx\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.866696 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.866773 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.866807 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.869317 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.869384 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.869419 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.869465 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.869555 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.869621 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.885698 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" event={"ID":"56cb8af6-449e-48fd-aa91-bb358634ff4a","Type":"ContainerStarted","Data":"58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc"} Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.912768 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q4w85"] Nov 21 13:55:50 crc kubenswrapper[4675]: W1121 13:55:50.916040 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88ddf172_a0a8_40ea_a447_36295e8cfb52.slice/crio-dae11d333459279551f14aa1ae5fff16f42b8fb2df3b31c50b7bd98534542cca WatchSource:0}: Error finding container dae11d333459279551f14aa1ae5fff16f42b8fb2df3b31c50b7bd98534542cca: Status 404 returned error can't find the container with id dae11d333459279551f14aa1ae5fff16f42b8fb2df3b31c50b7bd98534542cca Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.970783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.970847 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.970875 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.970910 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.970942 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.970963 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.971009 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.971048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlzx\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-kube-api-access-9mlzx\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.971117 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.971134 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.971155 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.972017 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.972328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.973754 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.973786 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.974836 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.975387 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.980037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.984122 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.986618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.988222 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:50 crc kubenswrapper[4675]: I1121 13:55:50.989423 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlzx\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-kube-api-access-9mlzx\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.030786 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.126175 4675 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.128056 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.138517 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.138892 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.139110 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.139256 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.139654 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4nr7c" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.139859 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.140012 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.152419 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.152944 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276403 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-config-data\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276498 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276538 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276606 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276649 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276678 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276703 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2s5\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-kube-api-access-pb2s5\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276777 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276799 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.276831 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.378944 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb2s5\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-kube-api-access-pb2s5\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.379278 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.379302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 
21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.379364 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.379404 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-config-data\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.380141 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.382804 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.382872 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.382927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.383002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.383027 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.383087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.383125 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.383865 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-config-data\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.384168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.384349 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.387346 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.387736 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.393130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.397818 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb2s5\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-kube-api-access-pb2s5\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.401783 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.401884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.441840 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.470578 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.724266 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:55:51 crc kubenswrapper[4675]: I1121 13:55:51.900769 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" event={"ID":"88ddf172-a0a8-40ea-a447-36295e8cfb52","Type":"ContainerStarted","Data":"dae11d333459279551f14aa1ae5fff16f42b8fb2df3b31c50b7bd98534542cca"} Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.369767 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.375023 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.377159 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.382183 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-v6jln" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.382400 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.383394 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.385756 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.386816 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510158 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510211 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-config-data-default\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510246 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/05bf6265-2f8a-4d78-9f5a-05304816937d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510283 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510431 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n29z\" (UniqueName: \"kubernetes.io/projected/05bf6265-2f8a-4d78-9f5a-05304816937d-kube-api-access-2n29z\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510572 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/05bf6265-2f8a-4d78-9f5a-05304816937d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510605 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bf6265-2f8a-4d78-9f5a-05304816937d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.510943 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-kolla-config\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612167 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-kolla-config\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-config-data-default\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612307 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/05bf6265-2f8a-4d78-9f5a-05304816937d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612346 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n29z\" (UniqueName: \"kubernetes.io/projected/05bf6265-2f8a-4d78-9f5a-05304816937d-kube-api-access-2n29z\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612388 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/05bf6265-2f8a-4d78-9f5a-05304816937d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.612401 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bf6265-2f8a-4d78-9f5a-05304816937d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.613742 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-kolla-config\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.614007 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/05bf6265-2f8a-4d78-9f5a-05304816937d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.614306 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.618995 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-config-data-default\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.620291 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05bf6265-2f8a-4d78-9f5a-05304816937d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.621052 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05bf6265-2f8a-4d78-9f5a-05304816937d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.641458 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/05bf6265-2f8a-4d78-9f5a-05304816937d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.645694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n29z\" (UniqueName: \"kubernetes.io/projected/05bf6265-2f8a-4d78-9f5a-05304816937d-kube-api-access-2n29z\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.659155 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"05bf6265-2f8a-4d78-9f5a-05304816937d\") " pod="openstack/openstack-galera-0" Nov 21 13:55:52 crc kubenswrapper[4675]: I1121 13:55:52.703773 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.076024 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.077490 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.079886 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4dq8w" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.080281 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.080465 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.088943 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.225834 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae83905-939b-4ae5-bab9-993356ce17b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.225913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ae83905-939b-4ae5-bab9-993356ce17b8-kolla-config\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.226051 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae83905-939b-4ae5-bab9-993356ce17b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.226103 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzp4\" (UniqueName: \"kubernetes.io/projected/8ae83905-939b-4ae5-bab9-993356ce17b8-kube-api-access-xwzp4\") pod \"memcached-0\" (UID: 
\"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.226182 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ae83905-939b-4ae5-bab9-993356ce17b8-config-data\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.244341 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.246689 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.250901 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.250901 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-s4fv5" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.251336 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.251149 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.263044 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.328336 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae83905-939b-4ae5-bab9-993356ce17b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.328719 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzp4\" (UniqueName: \"kubernetes.io/projected/8ae83905-939b-4ae5-bab9-993356ce17b8-kube-api-access-xwzp4\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.328871 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ae83905-939b-4ae5-bab9-993356ce17b8-config-data\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.329053 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae83905-939b-4ae5-bab9-993356ce17b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.329198 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ae83905-939b-4ae5-bab9-993356ce17b8-kolla-config\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.330336 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ae83905-939b-4ae5-bab9-993356ce17b8-kolla-config\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.332205 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ae83905-939b-4ae5-bab9-993356ce17b8-config-data\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.350970 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae83905-939b-4ae5-bab9-993356ce17b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.351528 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae83905-939b-4ae5-bab9-993356ce17b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.364864 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzp4\" (UniqueName: \"kubernetes.io/projected/8ae83905-939b-4ae5-bab9-993356ce17b8-kube-api-access-xwzp4\") pod \"memcached-0\" (UID: \"8ae83905-939b-4ae5-bab9-993356ce17b8\") " pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.402982 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.430909 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9adb63e-74d2-48f6-b639-4b22def78e35-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431043 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431143 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dww4\" (UniqueName: \"kubernetes.io/projected/c9adb63e-74d2-48f6-b639-4b22def78e35-kube-api-access-2dww4\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431178 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431654 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9adb63e-74d2-48f6-b639-4b22def78e35-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431708 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9adb63e-74d2-48f6-b639-4b22def78e35-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.431862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.533269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9adb63e-74d2-48f6-b639-4b22def78e35-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.533349 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9adb63e-74d2-48f6-b639-4b22def78e35-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.534306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.534348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9adb63e-74d2-48f6-b639-4b22def78e35-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.534389 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.534437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.534466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dww4\" (UniqueName: \"kubernetes.io/projected/c9adb63e-74d2-48f6-b639-4b22def78e35-kube-api-access-2dww4\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.534497 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.535038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9adb63e-74d2-48f6-b639-4b22def78e35-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.535260 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.535903 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.536937 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.538240 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9adb63e-74d2-48f6-b639-4b22def78e35-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.539026 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9adb63e-74d2-48f6-b639-4b22def78e35-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.545260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9adb63e-74d2-48f6-b639-4b22def78e35-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.553566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dww4\" (UniqueName: \"kubernetes.io/projected/c9adb63e-74d2-48f6-b639-4b22def78e35-kube-api-access-2dww4\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.562739 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c9adb63e-74d2-48f6-b639-4b22def78e35\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:54 crc kubenswrapper[4675]: I1121 13:55:54.573942 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.319646 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.322000 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.326462 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-s77wt" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.361056 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.474221 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6ph\" (UniqueName: \"kubernetes.io/projected/8b27bcc8-0305-4074-8d8d-9bb6e33cf000-kube-api-access-gx6ph\") pod \"kube-state-metrics-0\" (UID: \"8b27bcc8-0305-4074-8d8d-9bb6e33cf000\") " pod="openstack/kube-state-metrics-0" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.575727 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6ph\" (UniqueName: \"kubernetes.io/projected/8b27bcc8-0305-4074-8d8d-9bb6e33cf000-kube-api-access-gx6ph\") pod \"kube-state-metrics-0\" (UID: \"8b27bcc8-0305-4074-8d8d-9bb6e33cf000\") " pod="openstack/kube-state-metrics-0" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.617987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6ph\" (UniqueName: \"kubernetes.io/projected/8b27bcc8-0305-4074-8d8d-9bb6e33cf000-kube-api-access-gx6ph\") pod \"kube-state-metrics-0\" (UID: \"8b27bcc8-0305-4074-8d8d-9bb6e33cf000\") " pod="openstack/kube-state-metrics-0" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.656502 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.965584 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh"] Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.967310 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.969402 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-v6ztc" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.970203 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 21 13:55:56 crc kubenswrapper[4675]: I1121 13:55:56.974363 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh"] Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.084008 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.084421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqv57\" (UniqueName: \"kubernetes.io/projected/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-kube-api-access-zqv57\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.186239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqv57\" (UniqueName: \"kubernetes.io/projected/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-kube-api-access-zqv57\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.186395 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: E1121 13:55:57.186531 4675 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 21 13:55:57 crc kubenswrapper[4675]: E1121 13:55:57.186584 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-serving-cert podName:1d6f8b49-15cf-404d-8bda-1ae7a7292d2b nodeName:}" failed. No retries permitted until 2025-11-21 13:55:57.68656504 +0000 UTC m=+1434.412979767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-mdzgh" (UID: "1d6f8b49-15cf-404d-8bda-1ae7a7292d2b") : secret "observability-ui-dashboards" not found Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.215121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqv57\" (UniqueName: \"kubernetes.io/projected/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-kube-api-access-zqv57\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.333630 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5968f57749-wss5l"] Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.338507 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.354395 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5968f57749-wss5l"] Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.495245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-serving-cert\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.495574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-service-ca\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.495706 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qnl\" (UniqueName: \"kubernetes.io/projected/ea7ba623-8dba-491e-b1aa-c55c918df93c-kube-api-access-g5qnl\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.495868 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-oauth-serving-cert\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.496000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-config\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.496249 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-trusted-ca-bundle\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.496412 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-oauth-config\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.540867 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.544594 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.553358 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.553495 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.553572 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.553518 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.553909 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8mjkk" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.561712 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.562798 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604121 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-oauth-serving-cert\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604178 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-config\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604267 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-trusted-ca-bundle\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604308 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-oauth-config\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604366 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-serving-cert\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-service-ca\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.604415 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5qnl\" (UniqueName: \"kubernetes.io/projected/ea7ba623-8dba-491e-b1aa-c55c918df93c-kube-api-access-g5qnl\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.605776 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-trusted-ca-bundle\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.605799 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-oauth-serving-cert\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.606077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-config\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.606203 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7ba623-8dba-491e-b1aa-c55c918df93c-service-ca\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.607685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-serving-cert\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.623757 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ea7ba623-8dba-491e-b1aa-c55c918df93c-console-oauth-config\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.637039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5qnl\" (UniqueName: \"kubernetes.io/projected/ea7ba623-8dba-491e-b1aa-c55c918df93c-kube-api-access-g5qnl\") pod \"console-5968f57749-wss5l\" (UID: \"ea7ba623-8dba-491e-b1aa-c55c918df93c\") " pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.679145 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706219 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6127fe70-ba8b-4093-9146-7dce78995786-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706279 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706310 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6127fe70-ba8b-4093-9146-7dce78995786-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706368 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706401 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706417 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-config\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706439 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjdfl\" (UniqueName: 
\"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-kube-api-access-fjdfl\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.706745 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.709323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6f8b49-15cf-404d-8bda-1ae7a7292d2b-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-mdzgh\" (UID: \"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.808825 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.809472 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-config\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.809504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjdfl\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-kube-api-access-fjdfl\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.810587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.810634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.810747 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6127fe70-ba8b-4093-9146-7dce78995786-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.810818 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.810941 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6127fe70-ba8b-4093-9146-7dce78995786-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.813789 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6127fe70-ba8b-4093-9146-7dce78995786-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.814332 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.814870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6127fe70-ba8b-4093-9146-7dce78995786-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.815381 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.815418 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-config\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.815933 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.816021 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/98ccaa3eff25f930abf25e5d04600bac866339b49645deb89c687ceef7decd24/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.816690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.831801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjdfl\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-kube-api-access-fjdfl\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.861928 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:57 crc kubenswrapper[4675]: I1121 13:55:57.908934 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" Nov 21 13:55:58 crc kubenswrapper[4675]: I1121 13:55:58.163430 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:58 crc kubenswrapper[4675]: E1121 13:55:58.198432 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:d38baf2ed3e0924da55da06595c7e686717f3e4b5122bcbe70555da237e64929: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:55:58 crc kubenswrapper[4675]: E1121 13:55:58.198607 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvwph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pcqtd_openstack(7ed87882-a8fb-4fca-95f2-1b087b762b4e): ErrImagePull: reading blob sha256:d38baf2ed3e0924da55da06595c7e686717f3e4b5122bcbe70555da237e64929: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Nov 21 13:55:58 crc kubenswrapper[4675]: E1121 13:55:58.201007 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"reading blob sha256:d38baf2ed3e0924da55da06595c7e686717f3e4b5122bcbe70555da237e64929: fetching blob: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd" podUID="7ed87882-a8fb-4fca-95f2-1b087b762b4e" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.126011 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l5r9b"] Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.127691 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.134507 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.134615 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kfznf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.134514 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.141279 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-j7prf"] Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.143773 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.176682 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5r9b"] Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.186860 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-j7prf"] Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237356 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-scripts\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237407 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-combined-ca-bundle\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237464 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvhg\" (UniqueName: \"kubernetes.io/projected/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-kube-api-access-cgvhg\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-run\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237572 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-etc-ovs\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237594 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-scripts\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " 
pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237611 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvds7\" (UniqueName: \"kubernetes.io/projected/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-kube-api-access-tvds7\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237649 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-lib\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237690 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-run-ovn\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237724 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-run\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.237934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-ovn-controller-tls-certs\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.238138 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-log-ovn\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.238181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-log\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-scripts\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-combined-ca-bundle\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc 
kubenswrapper[4675]: I1121 13:55:59.340247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgvhg\" (UniqueName: \"kubernetes.io/projected/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-kube-api-access-cgvhg\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340277 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-run\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-etc-ovs\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-scripts\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340352 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvds7\" (UniqueName: \"kubernetes.io/projected/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-kube-api-access-tvds7\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340384 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-lib\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340420 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-run-ovn\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340453 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-run\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-ovn-controller-tls-certs\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340559 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-log-ovn\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.340581 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-log\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.341153 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-log\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.341751 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-run\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.341889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-etc-ovs\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.342368 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-run-ovn\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.342885 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-lib\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.343348 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-scripts\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.343470 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-var-run\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.343572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-var-log-ovn\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.345135 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-scripts\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.358202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-ovn-controller-tls-certs\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.360490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-combined-ca-bundle\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.360931 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvds7\" (UniqueName: \"kubernetes.io/projected/4a15b97a-aa41-4d4d-8f75-0b3d2193eded-kube-api-access-tvds7\") pod \"ovn-controller-l5r9b\" (UID: \"4a15b97a-aa41-4d4d-8f75-0b3d2193eded\") " pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.362985 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgvhg\" (UniqueName: \"kubernetes.io/projected/90977e3d-e36b-4b13-b7f8-f98a6fdc56bc-kube-api-access-cgvhg\") pod \"ovn-controller-ovs-j7prf\" (UID: \"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc\") " pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.449141 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5r9b" Nov 21 13:55:59 crc kubenswrapper[4675]: I1121 13:55:59.475607 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:56:00 crc kubenswrapper[4675]: W1121 13:56:00.476476 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d25ef58_c63a_4689_9ca0_3955b0a3d1df.slice/crio-3dfd51c5f72df211062294f51f2fc6079112dbb8d1373b30fcec7bfe871117ac WatchSource:0}: Error finding container 3dfd51c5f72df211062294f51f2fc6079112dbb8d1373b30fcec7bfe871117ac: Status 404 returned error can't find the container with id 3dfd51c5f72df211062294f51f2fc6079112dbb8d1373b30fcec7bfe871117ac Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.601999 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd" Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.671241 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvwph\" (UniqueName: \"kubernetes.io/projected/7ed87882-a8fb-4fca-95f2-1b087b762b4e-kube-api-access-zvwph\") pod \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.671326 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed87882-a8fb-4fca-95f2-1b087b762b4e-config\") pod \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\" (UID: \"7ed87882-a8fb-4fca-95f2-1b087b762b4e\") " Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.671958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed87882-a8fb-4fca-95f2-1b087b762b4e-config" (OuterVolumeSpecName: "config") pod "7ed87882-a8fb-4fca-95f2-1b087b762b4e" (UID: "7ed87882-a8fb-4fca-95f2-1b087b762b4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.692358 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed87882-a8fb-4fca-95f2-1b087b762b4e-kube-api-access-zvwph" (OuterVolumeSpecName: "kube-api-access-zvwph") pod "7ed87882-a8fb-4fca-95f2-1b087b762b4e" (UID: "7ed87882-a8fb-4fca-95f2-1b087b762b4e"). InnerVolumeSpecName "kube-api-access-zvwph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.773335 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvwph\" (UniqueName: \"kubernetes.io/projected/7ed87882-a8fb-4fca-95f2-1b087b762b4e-kube-api-access-zvwph\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:00 crc kubenswrapper[4675]: I1121 13:56:00.773369 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed87882-a8fb-4fca-95f2-1b087b762b4e-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:01 crc kubenswrapper[4675]: E1121 13:56:01.187604 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified: reading manifest current-podified in quay.io/podified-antelope-centos9/openstack-neutron-server: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:56:01 crc kubenswrapper[4675]: E1121 13:56:01.187776 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq6hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-q4w85_openstack(88ddf172-a0a8-40ea-a447-36295e8cfb52): ErrImagePull: initializing source docker://quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified: reading manifest current-podified in quay.io/podified-antelope-centos9/openstack-neutron-server: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Nov 21 13:56:01 crc kubenswrapper[4675]: E1121 13:56:01.189565 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"initializing source docker://quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified: reading manifest current-podified in quay.io/podified-antelope-centos9/openstack-neutron-server: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" Nov 21 13:56:01 crc kubenswrapper[4675]: I1121 13:56:01.197160 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd" event={"ID":"7ed87882-a8fb-4fca-95f2-1b087b762b4e","Type":"ContainerDied","Data":"522faa0d4325dfa595e4cc3cd7bd17f11e1c617fcf64e2e49568170fcef45f55"} Nov 21 13:56:01 crc kubenswrapper[4675]: I1121 13:56:01.197242 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pcqtd" Nov 21 13:56:01 crc kubenswrapper[4675]: I1121 13:56:01.201174 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d25ef58-c63a-4689-9ca0-3955b0a3d1df","Type":"ContainerStarted","Data":"3dfd51c5f72df211062294f51f2fc6079112dbb8d1373b30fcec7bfe871117ac"} Nov 21 13:56:01 crc kubenswrapper[4675]: I1121 13:56:01.248197 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pcqtd"] Nov 21 13:56:01 crc kubenswrapper[4675]: I1121 13:56:01.256324 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pcqtd"] Nov 21 13:56:02 crc kubenswrapper[4675]: E1121 13:56:02.211682 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" Nov 21 13:56:02 crc kubenswrapper[4675]: I1121 13:56:02.869486 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed87882-a8fb-4fca-95f2-1b087b762b4e" path="/var/lib/kubelet/pods/7ed87882-a8fb-4fca-95f2-1b087b762b4e/volumes" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.918385 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.920595 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.928481 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.928948 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.928952 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.929021 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.929099 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cmrfk" Nov 21 13:56:03 crc kubenswrapper[4675]: I1121 13:56:03.939664 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.057891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.057971 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.058021 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.058062 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.058123 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.058161 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.058183 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfr5\" (UniqueName: \"kubernetes.io/projected/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-kube-api-access-njfr5\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.058206 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.123362 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.125730 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.128522 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d85fw" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.131481 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.134263 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.134570 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.134793 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159692 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159834 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159911 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159952 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.159984 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njfr5\" (UniqueName: \"kubernetes.io/projected/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-kube-api-access-njfr5\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.160015 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.162136 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.163095 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.163305 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.164119 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.169619 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.171522 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.180417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.191086 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njfr5\" (UniqueName: \"kubernetes.io/projected/b1a22076-aa43-4fe3-83ad-1a3e22d3abc7-kube-api-access-njfr5\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.201763 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 
13:56:04.255192 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.261863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.261900 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.261936 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9l5s\" (UniqueName: \"kubernetes.io/projected/b5c900d8-26df-4201-9693-318f45bb93d8-kube-api-access-w9l5s\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.262001 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c900d8-26df-4201-9693-318f45bb93d8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.262028 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5c900d8-26df-4201-9693-318f45bb93d8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.262061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.262098 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5c900d8-26df-4201-9693-318f45bb93d8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.262127 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364050 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9l5s\" (UniqueName: \"kubernetes.io/projected/b5c900d8-26df-4201-9693-318f45bb93d8-kube-api-access-w9l5s\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " 
pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c900d8-26df-4201-9693-318f45bb93d8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364251 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5c900d8-26df-4201-9693-318f45bb93d8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364298 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364341 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5c900d8-26df-4201-9693-318f45bb93d8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364363 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.364468 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.365333 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c900d8-26df-4201-9693-318f45bb93d8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.365668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5c900d8-26df-4201-9693-318f45bb93d8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.365816 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") 
device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.366276 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5c900d8-26df-4201-9693-318f45bb93d8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.369374 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.370087 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.370704 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5c900d8-26df-4201-9693-318f45bb93d8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.383746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9l5s\" (UniqueName: \"kubernetes.io/projected/b5c900d8-26df-4201-9693-318f45bb93d8-kube-api-access-w9l5s\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.395642 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b5c900d8-26df-4201-9693-318f45bb93d8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:04 crc kubenswrapper[4675]: I1121 13:56:04.448456 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:07 crc kubenswrapper[4675]: E1121 13:56:07.775324 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:d575bb74a591f2d723eb758ea51678520c984b799cb5591b330a41254792e05c: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:56:07 crc kubenswrapper[4675]: E1121 13:56:07.776727 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj66l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-8q49k_openstack(56cb8af6-449e-48fd-aa91-bb358634ff4a): ErrImagePull: reading blob sha256:d575bb74a591f2d723eb758ea51678520c984b799cb5591b330a41254792e05c: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Nov 21 13:56:07 crc kubenswrapper[4675]: E1121 13:56:07.777916 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"reading blob sha256:d575bb74a591f2d723eb758ea51678520c984b799cb5591b330a41254792e05c: fetching blob: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" Nov 21 13:56:08 crc kubenswrapper[4675]: I1121 13:56:08.157701 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] 
Nov 21 13:56:08 crc kubenswrapper[4675]: E1121 13:56:08.269904 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" Nov 21 13:56:08 crc kubenswrapper[4675]: W1121 13:56:08.748350 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9adb63e_74d2_48f6_b639_4b22def78e35.slice/crio-f815636596764a22e3c122f289ab5c13ac24c1cb40c7988a0155ab335f4c9abe WatchSource:0}: Error finding container f815636596764a22e3c122f289ab5c13ac24c1cb40c7988a0155ab335f4c9abe: Status 404 returned error can't find the container with id f815636596764a22e3c122f289ab5c13ac24c1cb40c7988a0155ab335f4c9abe Nov 21 13:56:08 crc kubenswrapper[4675]: E1121 13:56:08.751809 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:56:08 crc kubenswrapper[4675]: E1121 13:56:08.751949 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zj9nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-q799p_openstack(568ef7bd-4d05-4b69-b17c-e1610ddff47b): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:56:08 crc kubenswrapper[4675]: E1121 13:56:08.753751 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p" podUID="568ef7bd-4d05-4b69-b17c-e1610ddff47b" Nov 21 13:56:08 crc kubenswrapper[4675]: I1121 13:56:08.889235 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-j7prf"] Nov 21 13:56:09 crc kubenswrapper[4675]: I1121 13:56:09.132800 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:56:09 crc kubenswrapper[4675]: I1121 13:56:09.277767 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c9adb63e-74d2-48f6-b639-4b22def78e35","Type":"ContainerStarted","Data":"f815636596764a22e3c122f289ab5c13ac24c1cb40c7988a0155ab335f4c9abe"} Nov 21 13:56:10 crc kubenswrapper[4675]: W1121 13:56:10.012458 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90977e3d_e36b_4b13_b7f8_f98a6fdc56bc.slice/crio-1762de73b065ab77f229ca61df047b3d6cddf6c8bbcdff34d0c3f76586c3b763 WatchSource:0}: Error finding container 1762de73b065ab77f229ca61df047b3d6cddf6c8bbcdff34d0c3f76586c3b763: Status 404 returned error can't find the container with id 1762de73b065ab77f229ca61df047b3d6cddf6c8bbcdff34d0c3f76586c3b763 Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.097988 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.195881 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9nk\" (UniqueName: \"kubernetes.io/projected/568ef7bd-4d05-4b69-b17c-e1610ddff47b-kube-api-access-zj9nk\") pod \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.196439 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-dns-svc\") pod \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.196516 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-config\") pod \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\" (UID: \"568ef7bd-4d05-4b69-b17c-e1610ddff47b\") " Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.201093 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "568ef7bd-4d05-4b69-b17c-e1610ddff47b" (UID: "568ef7bd-4d05-4b69-b17c-e1610ddff47b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.201303 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-config" (OuterVolumeSpecName: "config") pod "568ef7bd-4d05-4b69-b17c-e1610ddff47b" (UID: "568ef7bd-4d05-4b69-b17c-e1610ddff47b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.207765 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568ef7bd-4d05-4b69-b17c-e1610ddff47b-kube-api-access-zj9nk" (OuterVolumeSpecName: "kube-api-access-zj9nk") pod "568ef7bd-4d05-4b69-b17c-e1610ddff47b" (UID: "568ef7bd-4d05-4b69-b17c-e1610ddff47b"). InnerVolumeSpecName "kube-api-access-zj9nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.298886 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.298917 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568ef7bd-4d05-4b69-b17c-e1610ddff47b-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.298928 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9nk\" (UniqueName: \"kubernetes.io/projected/568ef7bd-4d05-4b69-b17c-e1610ddff47b-kube-api-access-zj9nk\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.310925 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p" event={"ID":"568ef7bd-4d05-4b69-b17c-e1610ddff47b","Type":"ContainerDied","Data":"b48c250d5924ffa197bc61a0febd13110a8fd2500b4ddab51661bfafb4b08b31"} Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.311029 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-q799p" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.320624 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b27bcc8-0305-4074-8d8d-9bb6e33cf000","Type":"ContainerStarted","Data":"dcc0e87a87b1c18b153c752d6e64baa98ac5649d3ba6e44ece4da46f91a220c7"} Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.322714 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j7prf" event={"ID":"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc","Type":"ContainerStarted","Data":"1762de73b065ab77f229ca61df047b3d6cddf6c8bbcdff34d0c3f76586c3b763"} Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.412231 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q799p"] Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.420527 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-q799p"] Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.820117 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.829634 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.837778 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.846391 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5r9b"] Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.864674 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568ef7bd-4d05-4b69-b17c-e1610ddff47b" path="/var/lib/kubelet/pods/568ef7bd-4d05-4b69-b17c-e1610ddff47b/volumes" Nov 21 13:56:10 crc kubenswrapper[4675]: I1121 13:56:10.995937 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5968f57749-wss5l"] Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.005291 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.044677 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ae83905_939b_4ae5_bab9_993356ce17b8.slice/crio-3a17287e1694fca342dc359885c7db80488b48d6e063c85885a1793acb4b2000 WatchSource:0}: Error finding container 3a17287e1694fca342dc359885c7db80488b48d6e063c85885a1793acb4b2000: Status 404 returned error can't find the container with id 3a17287e1694fca342dc359885c7db80488b48d6e063c85885a1793acb4b2000 Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.047022 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a15b97a_aa41_4d4d_8f75_0b3d2193eded.slice/crio-4979ef7664bf6233dc2ddefb4d1a252f27f9df1abfc0930f98237de12f27248e WatchSource:0}: Error finding container 4979ef7664bf6233dc2ddefb4d1a252f27f9df1abfc0930f98237de12f27248e: Status 404 returned error can't find the container with id 4979ef7664bf6233dc2ddefb4d1a252f27f9df1abfc0930f98237de12f27248e Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.052445 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea7ba623_8dba_491e_b1aa_c55c918df93c.slice/crio-5b338abad539446ab697172029d4ff190863079b58f45d32dfb187b1de9477cf WatchSource:0}: Error finding container 5b338abad539446ab697172029d4ff190863079b58f45d32dfb187b1de9477cf: Status 404 returned error can't find the container with id 5b338abad539446ab697172029d4ff190863079b58f45d32dfb187b1de9477cf Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.057760 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6127fe70_ba8b_4093_9146_7dce78995786.slice/crio-2b70c8a0b5ef0762075e620c46356a70fa79449a584cbec7cce692184de7ab98 WatchSource:0}: Error finding container 2b70c8a0b5ef0762075e620c46356a70fa79449a584cbec7cce692184de7ab98: Status 404 returned error can't find the container with id 2b70c8a0b5ef0762075e620c46356a70fa79449a584cbec7cce692184de7ab98 Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.059554 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22bdc76a_2740_432c_a43f_e0a57fdcb2c4.slice/crio-a326d3797d13f2190847bc801f31a23e443eef2b4bd24b2ccd28f271bdc8cf9a WatchSource:0}: Error finding container a326d3797d13f2190847bc801f31a23e443eef2b4bd24b2ccd28f271bdc8cf9a: Status 404 returned error can't find the container with id a326d3797d13f2190847bc801f31a23e443eef2b4bd24b2ccd28f271bdc8cf9a Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.066621 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05bf6265_2f8a_4d78_9f5a_05304816937d.slice/crio-adb5d7e3aedcff22044f607924c65375a48c30717092beeb8e7cdde579ab7da3 WatchSource:0}: Error finding container adb5d7e3aedcff22044f607924c65375a48c30717092beeb8e7cdde579ab7da3: Status 404 returned error can't find the container with id adb5d7e3aedcff22044f607924c65375a48c30717092beeb8e7cdde579ab7da3 Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.163490 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh"] Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.233224 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.336949 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"05bf6265-2f8a-4d78-9f5a-05304816937d","Type":"ContainerStarted","Data":"adb5d7e3aedcff22044f607924c65375a48c30717092beeb8e7cdde579ab7da3"} Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.338668 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8ae83905-939b-4ae5-bab9-993356ce17b8","Type":"ContainerStarted","Data":"3a17287e1694fca342dc359885c7db80488b48d6e063c85885a1793acb4b2000"} Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.339841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerStarted","Data":"2b70c8a0b5ef0762075e620c46356a70fa79449a584cbec7cce692184de7ab98"} Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.341264 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5968f57749-wss5l" 
event={"ID":"ea7ba623-8dba-491e-b1aa-c55c918df93c","Type":"ContainerStarted","Data":"5b338abad539446ab697172029d4ff190863079b58f45d32dfb187b1de9477cf"} Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.342735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22bdc76a-2740-432c-a43f-e0a57fdcb2c4","Type":"ContainerStarted","Data":"a326d3797d13f2190847bc801f31a23e443eef2b4bd24b2ccd28f271bdc8cf9a"} Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.346659 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5r9b" event={"ID":"4a15b97a-aa41-4d4d-8f75-0b3d2193eded","Type":"ContainerStarted","Data":"4979ef7664bf6233dc2ddefb4d1a252f27f9df1abfc0930f98237de12f27248e"} Nov 21 13:56:11 crc kubenswrapper[4675]: I1121 13:56:11.348096 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" event={"ID":"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b","Type":"ContainerStarted","Data":"e323d26ca27c93aab289f953560903ae764bfab956868c20c8dc39bd7e5bea0f"} Nov 21 13:56:11 crc kubenswrapper[4675]: W1121 13:56:11.404973 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5c900d8_26df_4201_9693_318f45bb93d8.slice/crio-35f64a9fec58382cacae5a7c2973a60db237cbcb05e44987dc9c024b25999bdf WatchSource:0}: Error finding container 35f64a9fec58382cacae5a7c2973a60db237cbcb05e44987dc9c024b25999bdf: Status 404 returned error can't find the container with id 35f64a9fec58382cacae5a7c2973a60db237cbcb05e44987dc9c024b25999bdf Nov 21 13:56:12 crc kubenswrapper[4675]: I1121 13:56:12.244720 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 13:56:12 crc kubenswrapper[4675]: I1121 13:56:12.357305 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b5c900d8-26df-4201-9693-318f45bb93d8","Type":"ContainerStarted","Data":"35f64a9fec58382cacae5a7c2973a60db237cbcb05e44987dc9c024b25999bdf"} Nov 21 13:56:12 crc kubenswrapper[4675]: I1121 13:56:12.359296 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d25ef58-c63a-4689-9ca0-3955b0a3d1df","Type":"ContainerStarted","Data":"a22a83806ff53fda2e092623ac08ebbffd804b4823e84aa138ab79f19378f685"} Nov 21 13:56:12 crc kubenswrapper[4675]: I1121 13:56:12.361161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5968f57749-wss5l" event={"ID":"ea7ba623-8dba-491e-b1aa-c55c918df93c","Type":"ContainerStarted","Data":"a7389a7991fcd895c974e36e13bb6f742f454af9fb04606f2c0b93a10bc3f213"} Nov 21 13:56:12 crc kubenswrapper[4675]: I1121 13:56:12.400415 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5968f57749-wss5l" podStartSLOduration=15.400378972 podStartE2EDuration="15.400378972s" podCreationTimestamp="2025-11-21 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:12.395189803 +0000 UTC m=+1449.121604560" watchObservedRunningTime="2025-11-21 13:56:12.400378972 +0000 UTC m=+1449.126793699" Nov 21 13:56:13 crc kubenswrapper[4675]: W1121 13:56:13.278918 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a22076_aa43_4fe3_83ad_1a3e22d3abc7.slice/crio-f012a31dcba50bce8869957cd0d59f90a8650fbe268188161e4e8250d6b486a4 WatchSource:0}: Error finding container f012a31dcba50bce8869957cd0d59f90a8650fbe268188161e4e8250d6b486a4: Status 404 returned error can't find the container with id f012a31dcba50bce8869957cd0d59f90a8650fbe268188161e4e8250d6b486a4 Nov 21 13:56:13 crc kubenswrapper[4675]: I1121 13:56:13.370841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7","Type":"ContainerStarted","Data":"f012a31dcba50bce8869957cd0d59f90a8650fbe268188161e4e8250d6b486a4"} Nov 21 13:56:17 crc kubenswrapper[4675]: I1121 13:56:17.680791 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:56:17 crc kubenswrapper[4675]: I1121 13:56:17.681369 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:56:17 crc kubenswrapper[4675]: I1121 13:56:17.690573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:56:18 crc kubenswrapper[4675]: I1121 13:56:18.431683 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5968f57749-wss5l" Nov 21 13:56:18 crc kubenswrapper[4675]: I1121 13:56:18.505768 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84b94b4484-zz6mx"] Nov 21 13:56:20 crc kubenswrapper[4675]: E1121 13:56:20.000886 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:9edb696970a5d944aa0a013096ed565e4092e9a751f383a80fdde57bd71155e3: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 21 13:56:20 crc kubenswrapper[4675]: E1121 13:56:20.001219 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dww4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(c9adb63e-74d2-48f6-b639-4b22def78e35): ErrImagePull: reading blob sha256:9edb696970a5d944aa0a013096ed565e4092e9a751f383a80fdde57bd71155e3: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Nov 21 13:56:20 crc kubenswrapper[4675]: E1121 13:56:20.002542 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"reading blob sha256:9edb696970a5d944aa0a013096ed565e4092e9a751f383a80fdde57bd71155e3: fetching blob: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/openstack-cell1-galera-0" podUID="c9adb63e-74d2-48f6-b639-4b22def78e35" Nov 21 13:56:24 crc kubenswrapper[4675]: E1121 13:56:24.106334 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="c9adb63e-74d2-48f6-b639-4b22def78e35" Nov 21 13:56:25 crc kubenswrapper[4675]: E1121 13:56:25.001819 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 21 13:56:25 crc kubenswrapper[4675]: E1121 13:56:25.002340 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nf9h9bh65bh574h5b7h658h68bh657h68fh649hc5hd5h5d4h5cbh5b9h56ch5f9h588h7ch695h668h55fh648h5f8h57bh556h678h85h574hcch97h56fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwzp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(8ae83905-939b-4ae5-bab9-993356ce17b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:56:25 crc kubenswrapper[4675]: E1121 13:56:25.003708 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/memcached-0" podUID="8ae83905-939b-4ae5-bab9-993356ce17b8" Nov 21 13:56:25 crc kubenswrapper[4675]: E1121 13:56:25.680994 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="8ae83905-939b-4ae5-bab9-993356ce17b8" Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.512734 4675 generic.go:334] "Generic (PLEG): container finished" podID="90977e3d-e36b-4b13-b7f8-f98a6fdc56bc" containerID="f7588da9a132205c775fb80d9b03d37b31259ba81c269c6e42fce7cd8860de24" exitCode=0 Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.512806 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j7prf" event={"ID":"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc","Type":"ContainerDied","Data":"f7588da9a132205c775fb80d9b03d37b31259ba81c269c6e42fce7cd8860de24"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.514523 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"05bf6265-2f8a-4d78-9f5a-05304816937d","Type":"ContainerStarted","Data":"810c83c2d3672ad5ba1e9162febd0ece23ac55c0f199b558fd7b0dd635b75a6e"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.516638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b5c900d8-26df-4201-9693-318f45bb93d8","Type":"ContainerStarted","Data":"e7ec689ffda784ca3e9aa08542f166c603570f42fcc74003e49dc7e14d2b1518"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.518313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5r9b" event={"ID":"4a15b97a-aa41-4d4d-8f75-0b3d2193eded","Type":"ContainerStarted","Data":"3a84ef62794e3524346ad06a01c6a8d538e9cdf7f45d460a3fa44d26cb384b12"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.518659 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-l5r9b" Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.524002 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" event={"ID":"1d6f8b49-15cf-404d-8bda-1ae7a7292d2b","Type":"ContainerStarted","Data":"1517fae9784f14af4f1c51e35c63e509e6620521402101eb647a2f8327be0ad9"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.526377 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7","Type":"ContainerStarted","Data":"63f1f65b5ff46afad757306fd8a11873f3c8133978967775050a618b48803a9e"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.527856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b27bcc8-0305-4074-8d8d-9bb6e33cf000","Type":"ContainerStarted","Data":"f9191200bca3962b6b080fb2ae51422624b33d0f9420993d332fbf85f37af4b1"} Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.528086 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.552410 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-mdzgh" podStartSLOduration=16.354776529 podStartE2EDuration="31.552395371s" podCreationTimestamp="2025-11-21 13:55:56 +0000 UTC" 
firstStartedPulling="2025-11-21 13:56:11.311563958 +0000 UTC m=+1448.037978685" lastFinishedPulling="2025-11-21 13:56:26.5091828 +0000 UTC m=+1463.235597527" observedRunningTime="2025-11-21 13:56:27.552018452 +0000 UTC m=+1464.278433189" watchObservedRunningTime="2025-11-21 13:56:27.552395371 +0000 UTC m=+1464.278810088" Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.613555 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.097314168 podStartE2EDuration="31.61353432s" podCreationTimestamp="2025-11-21 13:55:56 +0000 UTC" firstStartedPulling="2025-11-21 13:56:09.99868123 +0000 UTC m=+1446.725095967" lastFinishedPulling="2025-11-21 13:56:26.514901392 +0000 UTC m=+1463.241316119" observedRunningTime="2025-11-21 13:56:27.608094194 +0000 UTC m=+1464.334508921" watchObservedRunningTime="2025-11-21 13:56:27.61353432 +0000 UTC m=+1464.339949067" Nov 21 13:56:27 crc kubenswrapper[4675]: I1121 13:56:27.634294 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-l5r9b" podStartSLOduration=13.174885401 podStartE2EDuration="28.634274615s" podCreationTimestamp="2025-11-21 13:55:59 +0000 UTC" firstStartedPulling="2025-11-21 13:56:11.049302204 +0000 UTC m=+1447.775716931" lastFinishedPulling="2025-11-21 13:56:26.508691418 +0000 UTC m=+1463.235106145" observedRunningTime="2025-11-21 13:56:27.627353473 +0000 UTC m=+1464.353768190" watchObservedRunningTime="2025-11-21 13:56:27.634274615 +0000 UTC m=+1464.360689332" Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.560234 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j7prf" event={"ID":"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc","Type":"ContainerStarted","Data":"92ba5f3f7316e4687b6cc350c3c96b59a8dc4255e43dc7a59e4e1c2a6ec4fde3"} Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.560779 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-j7prf" event={"ID":"90977e3d-e36b-4b13-b7f8-f98a6fdc56bc","Type":"ContainerStarted","Data":"ea64416a2037896efafe008ded9b2934ea11f310324b7654828ac7b30b94b3ee"} Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.560796 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.560805 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-j7prf" Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.563184 4675 generic.go:334] "Generic (PLEG): container finished" podID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerID="aeddce8a8bec368d6c12559dcaf8c93d8c16445e0652d421b9ec3a09170b2c20" exitCode=0 Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.563247 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" event={"ID":"56cb8af6-449e-48fd-aa91-bb358634ff4a","Type":"ContainerDied","Data":"aeddce8a8bec368d6c12559dcaf8c93d8c16445e0652d421b9ec3a09170b2c20"} Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.565304 4675 generic.go:334] "Generic (PLEG): container finished" podID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerID="2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad" exitCode=0 Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.565412 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" 
event={"ID":"88ddf172-a0a8-40ea-a447-36295e8cfb52","Type":"ContainerDied","Data":"2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad"} Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.566873 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22bdc76a-2740-432c-a43f-e0a57fdcb2c4","Type":"ContainerStarted","Data":"1d6d46a106cd3fc5f9be1fd55be2418cdb5bfcfe23f9faec67f0aa8d972ea46d"} Nov 21 13:56:29 crc kubenswrapper[4675]: I1121 13:56:29.586412 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-j7prf" podStartSLOduration=25.010791926 podStartE2EDuration="30.58638852s" podCreationTimestamp="2025-11-21 13:55:59 +0000 UTC" firstStartedPulling="2025-11-21 13:56:10.088860189 +0000 UTC m=+1446.815274916" lastFinishedPulling="2025-11-21 13:56:15.664456783 +0000 UTC m=+1452.390871510" observedRunningTime="2025-11-21 13:56:29.576504264 +0000 UTC m=+1466.302918991" watchObservedRunningTime="2025-11-21 13:56:29.58638852 +0000 UTC m=+1466.312803247" Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.579787 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" event={"ID":"56cb8af6-449e-48fd-aa91-bb358634ff4a","Type":"ContainerStarted","Data":"4ee62e7a8e017728171402101d21cb8b3e81ab600a6b19f54920bfaa5c892e55"} Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.580406 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.584123 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" event={"ID":"88ddf172-a0a8-40ea-a447-36295e8cfb52","Type":"ContainerStarted","Data":"6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4"} Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.584281 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.586164 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerStarted","Data":"2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240"} Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.600162 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" podStartSLOduration=5.005377117 podStartE2EDuration="41.600143749s" podCreationTimestamp="2025-11-21 13:55:49 +0000 UTC" firstStartedPulling="2025-11-21 13:55:50.806144737 +0000 UTC m=+1427.532559464" lastFinishedPulling="2025-11-21 13:56:27.400911369 +0000 UTC m=+1464.127326096" observedRunningTime="2025-11-21 13:56:30.597726799 +0000 UTC m=+1467.324141526" watchObservedRunningTime="2025-11-21 13:56:30.600143749 +0000 UTC m=+1467.326558496" Nov 21 13:56:30 crc kubenswrapper[4675]: I1121 13:56:30.619196 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" podStartSLOduration=5.058021655 podStartE2EDuration="41.619178602s" podCreationTimestamp="2025-11-21 13:55:49 +0000 UTC" firstStartedPulling="2025-11-21 13:55:50.91860142 +0000 UTC m=+1427.645016147" lastFinishedPulling="2025-11-21 13:56:27.479758367 +0000 UTC m=+1464.206173094" observedRunningTime="2025-11-21 13:56:30.613708526 +0000 UTC m=+1467.340123273" 
watchObservedRunningTime="2025-11-21 13:56:30.619178602 +0000 UTC m=+1467.345593329" Nov 21 13:56:32 crc kubenswrapper[4675]: I1121 13:56:32.622100 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b5c900d8-26df-4201-9693-318f45bb93d8","Type":"ContainerStarted","Data":"3edd8a1ec13deb174a59dc26b1841c9ece77b75183a7040f5a3021f863018269"} Nov 21 13:56:32 crc kubenswrapper[4675]: I1121 13:56:32.624957 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b1a22076-aa43-4fe3-83ad-1a3e22d3abc7","Type":"ContainerStarted","Data":"546403e22f1dbb4981f7ea2681088c80cb02abdf5f5971b293d0723a30ff58ea"} Nov 21 13:56:32 crc kubenswrapper[4675]: I1121 13:56:32.626600 4675 generic.go:334] "Generic (PLEG): container finished" podID="05bf6265-2f8a-4d78-9f5a-05304816937d" containerID="810c83c2d3672ad5ba1e9162febd0ece23ac55c0f199b558fd7b0dd635b75a6e" exitCode=0 Nov 21 13:56:32 crc kubenswrapper[4675]: I1121 13:56:32.626644 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"05bf6265-2f8a-4d78-9f5a-05304816937d","Type":"ContainerDied","Data":"810c83c2d3672ad5ba1e9162febd0ece23ac55c0f199b558fd7b0dd635b75a6e"} Nov 21 13:56:32 crc kubenswrapper[4675]: I1121 13:56:32.655178 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.647068468 podStartE2EDuration="29.65516127s" podCreationTimestamp="2025-11-21 13:56:03 +0000 UTC" firstStartedPulling="2025-11-21 13:56:11.411221324 +0000 UTC m=+1448.137636051" lastFinishedPulling="2025-11-21 13:56:31.419314116 +0000 UTC m=+1468.145728853" observedRunningTime="2025-11-21 13:56:32.647644963 +0000 UTC m=+1469.374059710" watchObservedRunningTime="2025-11-21 13:56:32.65516127 +0000 UTC m=+1469.381575987" Nov 21 13:56:32 crc kubenswrapper[4675]: I1121 13:56:32.686985 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.576808018 podStartE2EDuration="30.68696312s" podCreationTimestamp="2025-11-21 13:56:02 +0000 UTC" firstStartedPulling="2025-11-21 13:56:13.284958953 +0000 UTC m=+1450.011373690" lastFinishedPulling="2025-11-21 13:56:31.395114065 +0000 UTC m=+1468.121528792" observedRunningTime="2025-11-21 13:56:32.686825846 +0000 UTC m=+1469.413240593" watchObservedRunningTime="2025-11-21 13:56:32.68696312 +0000 UTC m=+1469.413377847" Nov 21 13:56:33 crc kubenswrapper[4675]: I1121 13:56:33.639559 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"05bf6265-2f8a-4d78-9f5a-05304816937d","Type":"ContainerStarted","Data":"42ff26053314fb32647f1c30138207c066ceb1bf5073b2318be3f3a2f670ad21"} Nov 21 13:56:33 crc kubenswrapper[4675]: I1121 13:56:33.669647 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.231092262 podStartE2EDuration="42.669628847s" podCreationTimestamp="2025-11-21 13:55:51 +0000 UTC" firstStartedPulling="2025-11-21 13:56:11.069169308 +0000 UTC m=+1447.795584035" lastFinishedPulling="2025-11-21 13:56:26.507705873 +0000 UTC m=+1463.234120620" observedRunningTime="2025-11-21 13:56:33.662963161 +0000 UTC m=+1470.389377958" watchObservedRunningTime="2025-11-21 13:56:33.669628847 +0000 UTC m=+1470.396043574" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.256157 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.257249 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.297376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.449306 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.449379 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.504365 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.689224 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.692480 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.957220 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.975921 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q4w85"] Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.976155 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="dnsmasq-dns" containerID="cri-o://6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4" gracePeriod=10 Nov 21 13:56:34 crc kubenswrapper[4675]: I1121 13:56:34.979368 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.037922 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dq2mh"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.040802 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.062100 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.088340 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xc9np"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.092236 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.096379 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.115942 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dq2mh"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118157 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c1c50a5b-1bd7-4c2a-9424-770e8170212e-ovs-rundir\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118203 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118264 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118325 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c1c50a5b-1bd7-4c2a-9424-770e8170212e-ovn-rundir\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118455 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c50a5b-1bd7-4c2a-9424-770e8170212e-combined-ca-bundle\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118491 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c50a5b-1bd7-4c2a-9424-770e8170212e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118523 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-config\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118776 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjhxg\" (UniqueName: \"kubernetes.io/projected/65890acc-153d-404d-b513-f5d0e00c43d1-kube-api-access-wjhxg\") pod 
\"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c50a5b-1bd7-4c2a-9424-770e8170212e-config\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.118976 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvpl\" (UniqueName: \"kubernetes.io/projected/c1c50a5b-1bd7-4c2a-9424-770e8170212e-kube-api-access-8wvpl\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.167480 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xc9np"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223366 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c1c50a5b-1bd7-4c2a-9424-770e8170212e-ovn-rundir\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c50a5b-1bd7-4c2a-9424-770e8170212e-combined-ca-bundle\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223433 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c50a5b-1bd7-4c2a-9424-770e8170212e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-config\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjhxg\" (UniqueName: \"kubernetes.io/projected/65890acc-153d-404d-b513-f5d0e00c43d1-kube-api-access-wjhxg\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c50a5b-1bd7-4c2a-9424-770e8170212e-config\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223571 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8wvpl\" (UniqueName: \"kubernetes.io/projected/c1c50a5b-1bd7-4c2a-9424-770e8170212e-kube-api-access-8wvpl\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223623 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c1c50a5b-1bd7-4c2a-9424-770e8170212e-ovs-rundir\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223651 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.223681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.224568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.224804 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c1c50a5b-1bd7-4c2a-9424-770e8170212e-ovn-rundir\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.229482 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-config\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.229587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c1c50a5b-1bd7-4c2a-9424-770e8170212e-ovs-rundir\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.230132 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c50a5b-1bd7-4c2a-9424-770e8170212e-config\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.230201 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.245851 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c50a5b-1bd7-4c2a-9424-770e8170212e-combined-ca-bundle\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.258287 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wvpl\" (UniqueName: \"kubernetes.io/projected/c1c50a5b-1bd7-4c2a-9424-770e8170212e-kube-api-access-8wvpl\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.259467 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c50a5b-1bd7-4c2a-9424-770e8170212e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xc9np\" (UID: \"c1c50a5b-1bd7-4c2a-9424-770e8170212e\") " pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.262183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjhxg\" (UniqueName: \"kubernetes.io/projected/65890acc-153d-404d-b513-f5d0e00c43d1-kube-api-access-wjhxg\") pod \"dnsmasq-dns-5bf47b49b7-dq2mh\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.270358 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dq2mh"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.271369 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.304341 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-trdrw"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.311738 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.312274 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-trdrw"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.318817 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.319914 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.321643 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.328701 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.329420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.329453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2z6j\" (UniqueName: \"kubernetes.io/projected/95bb8692-22aa-4552-a633-86ccf0d7bd16-kube-api-access-n2z6j\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.329474 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.329527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-dns-svc\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.329571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-config\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.336872 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.337146 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.337354 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cjjc9" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.337463 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.359241 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.431593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: 
\"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.431798 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2z6j\" (UniqueName: \"kubernetes.io/projected/95bb8692-22aa-4552-a633-86ccf0d7bd16-kube-api-access-n2z6j\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.431899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5d93705-ae99-48ab-99e3-1e225f06ab6e-scripts\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.432009 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.432157 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.432270 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5d93705-ae99-48ab-99e3-1e225f06ab6e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.432365 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-dns-svc\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.432834 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.432946 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.433034 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-config\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc 
kubenswrapper[4675]: I1121 13:56:35.433170 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d93705-ae99-48ab-99e3-1e225f06ab6e-config\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.433260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2dq\" (UniqueName: \"kubernetes.io/projected/e5d93705-ae99-48ab-99e3-1e225f06ab6e-kube-api-access-lb2dq\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.434431 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.435404 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.437648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-dns-svc\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.438791 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-config\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.481883 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2z6j\" (UniqueName: \"kubernetes.io/projected/95bb8692-22aa-4552-a633-86ccf0d7bd16-kube-api-access-n2z6j\") pod \"dnsmasq-dns-8554648995-trdrw\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") " pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.503532 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xc9np" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.539861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5d93705-ae99-48ab-99e3-1e225f06ab6e-scripts\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540039 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5d93705-ae99-48ab-99e3-1e225f06ab6e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540235 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540380 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d93705-ae99-48ab-99e3-1e225f06ab6e-config\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540430 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb2dq\" (UniqueName: \"kubernetes.io/projected/e5d93705-ae99-48ab-99e3-1e225f06ab6e-kube-api-access-lb2dq\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.540712 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5d93705-ae99-48ab-99e3-1e225f06ab6e-scripts\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.541364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5d93705-ae99-48ab-99e3-1e225f06ab6e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.541899 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d93705-ae99-48ab-99e3-1e225f06ab6e-config\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " 
pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.545229 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.545470 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.548474 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5d93705-ae99-48ab-99e3-1e225f06ab6e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.565573 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb2dq\" (UniqueName: \"kubernetes.io/projected/e5d93705-ae99-48ab-99e3-1e225f06ab6e-kube-api-access-lb2dq\") pod \"ovn-northd-0\" (UID: \"e5d93705-ae99-48ab-99e3-1e225f06ab6e\") " pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.606773 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.641688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-dns-svc\") pod \"88ddf172-a0a8-40ea-a447-36295e8cfb52\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.641769 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq6hc\" (UniqueName: \"kubernetes.io/projected/88ddf172-a0a8-40ea-a447-36295e8cfb52-kube-api-access-lq6hc\") pod \"88ddf172-a0a8-40ea-a447-36295e8cfb52\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.641831 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-config\") pod \"88ddf172-a0a8-40ea-a447-36295e8cfb52\" (UID: \"88ddf172-a0a8-40ea-a447-36295e8cfb52\") " Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.645284 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ddf172-a0a8-40ea-a447-36295e8cfb52-kube-api-access-lq6hc" (OuterVolumeSpecName: "kube-api-access-lq6hc") pod "88ddf172-a0a8-40ea-a447-36295e8cfb52" (UID: "88ddf172-a0a8-40ea-a447-36295e8cfb52"). InnerVolumeSpecName "kube-api-access-lq6hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.685112 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.703377 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88ddf172-a0a8-40ea-a447-36295e8cfb52" (UID: "88ddf172-a0a8-40ea-a447-36295e8cfb52"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.709315 4675 generic.go:334] "Generic (PLEG): container finished" podID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerID="6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4" exitCode=0 Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.709606 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" event={"ID":"88ddf172-a0a8-40ea-a447-36295e8cfb52","Type":"ContainerDied","Data":"6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4"} Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.709666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" event={"ID":"88ddf172-a0a8-40ea-a447-36295e8cfb52","Type":"ContainerDied","Data":"dae11d333459279551f14aa1ae5fff16f42b8fb2df3b31c50b7bd98534542cca"} Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.709679 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-config" (OuterVolumeSpecName: "config") pod "88ddf172-a0a8-40ea-a447-36295e8cfb52" (UID: "88ddf172-a0a8-40ea-a447-36295e8cfb52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.709691 4675 scope.go:117] "RemoveContainer" containerID="6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.709910 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q4w85" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.726386 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.746790 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.746830 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq6hc\" (UniqueName: \"kubernetes.io/projected/88ddf172-a0a8-40ea-a447-36295e8cfb52-kube-api-access-lq6hc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.746843 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ddf172-a0a8-40ea-a447-36295e8cfb52-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.756598 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q4w85"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.761598 4675 scope.go:117] "RemoveContainer" containerID="2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.766819 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q4w85"] Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.816635 4675 scope.go:117] "RemoveContainer" containerID="6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4" Nov 21 13:56:35 crc kubenswrapper[4675]: E1121 13:56:35.817113 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4\": container with ID starting with 6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4 not found: ID does not exist" containerID="6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.817143 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4"} err="failed to get container status \"6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4\": rpc error: code = NotFound desc = could not find container \"6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4\": container with ID starting with 6bb78f2ca945364ac3fd95e620ed0a3677f6e7dad9b28398cf81bc017358e2d4 not found: ID does not exist" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.817163 4675 scope.go:117] "RemoveContainer" containerID="2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad" Nov 21 13:56:35 crc kubenswrapper[4675]: E1121 13:56:35.817570 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad\": container with ID starting with 2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad not found: ID does not exist" containerID="2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad" Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.817672 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad"} err="failed to get container status 
\"2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad\": rpc error: code = NotFound desc = could not find container \"2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad\": container with ID starting with 2e2c187d19a556a9d2579f91afb9a9c93057a62a77ef790aafb13b1faf1dc6ad not found: ID does not exist" Nov 21 13:56:35 crc kubenswrapper[4675]: W1121 13:56:35.875918 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65890acc_153d_404d_b513_f5d0e00c43d1.slice/crio-eb8d36f740127b2b058b8d6e5262bf83a914b0fd21a10a4ec8a3d9660f40d556 WatchSource:0}: Error finding container eb8d36f740127b2b058b8d6e5262bf83a914b0fd21a10a4ec8a3d9660f40d556: Status 404 returned error can't find the container with id eb8d36f740127b2b058b8d6e5262bf83a914b0fd21a10a4ec8a3d9660f40d556 Nov 21 13:56:35 crc kubenswrapper[4675]: I1121 13:56:35.880268 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dq2mh"] Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.031695 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xc9np"] Nov 21 13:56:36 crc kubenswrapper[4675]: W1121 13:56:36.037007 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1c50a5b_1bd7_4c2a_9424_770e8170212e.slice/crio-d6b3c23acd0e724224b4a12cd8726f76d1b03d5e0603400aa3d625958fd8082c WatchSource:0}: Error finding container d6b3c23acd0e724224b4a12cd8726f76d1b03d5e0603400aa3d625958fd8082c: Status 404 returned error can't find the container with id d6b3c23acd0e724224b4a12cd8726f76d1b03d5e0603400aa3d625958fd8082c Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.188653 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-trdrw"] Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.313241 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.660509 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.722350 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e5d93705-ae99-48ab-99e3-1e225f06ab6e","Type":"ContainerStarted","Data":"03d3d29ba29ca717eee86dbbc9d840d277678bb2b15912c6e0fc34ac45fe1ed5"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.729405 4675 generic.go:334] "Generic (PLEG): container finished" podID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerID="1c10f161a26613b83d7e5f9676b4b0cb9367c97ebc7e36d68f13b4f3023e3bbf" exitCode=0 Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.729492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-trdrw" event={"ID":"95bb8692-22aa-4552-a633-86ccf0d7bd16","Type":"ContainerDied","Data":"1c10f161a26613b83d7e5f9676b4b0cb9367c97ebc7e36d68f13b4f3023e3bbf"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.730334 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-trdrw" event={"ID":"95bb8692-22aa-4552-a633-86ccf0d7bd16","Type":"ContainerStarted","Data":"9c02c7d0505b99e472ecf78401eadf3e35a9f50f1f39af6105f75732fab6244a"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.736939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-xc9np" event={"ID":"c1c50a5b-1bd7-4c2a-9424-770e8170212e","Type":"ContainerStarted","Data":"1995f2f524b3150fc8b6feff2f80e960f1d1f4b8186f9c1b07f3260b1c85a081"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.736980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xc9np" event={"ID":"c1c50a5b-1bd7-4c2a-9424-770e8170212e","Type":"ContainerStarted","Data":"d6b3c23acd0e724224b4a12cd8726f76d1b03d5e0603400aa3d625958fd8082c"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.738919 4675 generic.go:334] "Generic (PLEG): container finished" podID="6127fe70-ba8b-4093-9146-7dce78995786" containerID="2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240" exitCode=0 Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.739037 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerDied","Data":"2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.741035 4675 generic.go:334] "Generic (PLEG): container finished" podID="65890acc-153d-404d-b513-f5d0e00c43d1" containerID="9e5d982133fe5e4a4dd287ce957ddc4c720ef709e59eade5d8be6061228c35cd" exitCode=0 Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.741826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" event={"ID":"65890acc-153d-404d-b513-f5d0e00c43d1","Type":"ContainerDied","Data":"9e5d982133fe5e4a4dd287ce957ddc4c720ef709e59eade5d8be6061228c35cd"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.741864 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" event={"ID":"65890acc-153d-404d-b513-f5d0e00c43d1","Type":"ContainerStarted","Data":"eb8d36f740127b2b058b8d6e5262bf83a914b0fd21a10a4ec8a3d9660f40d556"} Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.826825 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xc9np" podStartSLOduration=1.826803463 podStartE2EDuration="1.826803463s" podCreationTimestamp="2025-11-21 13:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:36.818705772 +0000 UTC m=+1473.545120509" watchObservedRunningTime="2025-11-21 13:56:36.826803463 +0000 UTC m=+1473.553218190" Nov 21 13:56:36 crc kubenswrapper[4675]: I1121 13:56:36.883800 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" path="/var/lib/kubelet/pods/88ddf172-a0a8-40ea-a447-36295e8cfb52/volumes" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.284495 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.407122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-config\") pod \"65890acc-153d-404d-b513-f5d0e00c43d1\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.407229 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-ovsdbserver-nb\") pod \"65890acc-153d-404d-b513-f5d0e00c43d1\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.407373 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjhxg\" (UniqueName: \"kubernetes.io/projected/65890acc-153d-404d-b513-f5d0e00c43d1-kube-api-access-wjhxg\") pod \"65890acc-153d-404d-b513-f5d0e00c43d1\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.407411 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-dns-svc\") pod \"65890acc-153d-404d-b513-f5d0e00c43d1\" (UID: \"65890acc-153d-404d-b513-f5d0e00c43d1\") " Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.412275 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65890acc-153d-404d-b513-f5d0e00c43d1-kube-api-access-wjhxg" (OuterVolumeSpecName: "kube-api-access-wjhxg") pod "65890acc-153d-404d-b513-f5d0e00c43d1" (UID: "65890acc-153d-404d-b513-f5d0e00c43d1"). InnerVolumeSpecName "kube-api-access-wjhxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.431032 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65890acc-153d-404d-b513-f5d0e00c43d1" (UID: "65890acc-153d-404d-b513-f5d0e00c43d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.432421 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65890acc-153d-404d-b513-f5d0e00c43d1" (UID: "65890acc-153d-404d-b513-f5d0e00c43d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.437947 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-config" (OuterVolumeSpecName: "config") pod "65890acc-153d-404d-b513-f5d0e00c43d1" (UID: "65890acc-153d-404d-b513-f5d0e00c43d1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.510202 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjhxg\" (UniqueName: \"kubernetes.io/projected/65890acc-153d-404d-b513-f5d0e00c43d1-kube-api-access-wjhxg\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.510245 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.510257 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.510277 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65890acc-153d-404d-b513-f5d0e00c43d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.830044 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-trdrw" event={"ID":"95bb8692-22aa-4552-a633-86ccf0d7bd16","Type":"ContainerStarted","Data":"0f2a96bc3aee8c6d132ecadf5e14be0be2c4da5ee85509c85fa5e5b086bd536b"} Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.830503 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.834962 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.835324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-dq2mh" event={"ID":"65890acc-153d-404d-b513-f5d0e00c43d1","Type":"ContainerDied","Data":"eb8d36f740127b2b058b8d6e5262bf83a914b0fd21a10a4ec8a3d9660f40d556"} Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.835384 4675 scope.go:117] "RemoveContainer" containerID="9e5d982133fe5e4a4dd287ce957ddc4c720ef709e59eade5d8be6061228c35cd" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.857661 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-trdrw" podStartSLOduration=2.857638187 podStartE2EDuration="2.857638187s" podCreationTimestamp="2025-11-21 13:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:37.852617552 +0000 UTC m=+1474.579032279" watchObservedRunningTime="2025-11-21 13:56:37.857638187 +0000 UTC m=+1474.584052914" Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.921830 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dq2mh"] Nov 21 13:56:37 crc kubenswrapper[4675]: I1121 13:56:37.933755 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dq2mh"] Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.845848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8ae83905-939b-4ae5-bab9-993356ce17b8","Type":"ContainerStarted","Data":"3549eab9987e1f5e25f72d4646cef96742816a6a67df13469c29fbe45e960458"} Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.846533 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.873494 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65890acc-153d-404d-b513-f5d0e00c43d1" path="/var/lib/kubelet/pods/65890acc-153d-404d-b513-f5d0e00c43d1/volumes" Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.874326 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.874361 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e5d93705-ae99-48ab-99e3-1e225f06ab6e","Type":"ContainerStarted","Data":"5bcb6aaec22f19d9d702c5b1cc0742dc029b80e7bb07d9d190003f0a49b292e4"} Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.874377 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e5d93705-ae99-48ab-99e3-1e225f06ab6e","Type":"ContainerStarted","Data":"0233ed81fe01e5c11478222065f28556567289e5a7ff8d356ef0a32557520560"} Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.875243 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.300816548 podStartE2EDuration="44.87521665s" podCreationTimestamp="2025-11-21 13:55:54 +0000 UTC" firstStartedPulling="2025-11-21 13:56:11.05074536 +0000 UTC m=+1447.777160087" lastFinishedPulling="2025-11-21 13:56:37.625145462 +0000 UTC m=+1474.351560189" observedRunningTime="2025-11-21 13:56:38.867610871 +0000 UTC m=+1475.594025618" watchObservedRunningTime="2025-11-21 13:56:38.87521665 +0000 UTC m=+1475.601631367" Nov 21 13:56:38 crc kubenswrapper[4675]: I1121 13:56:38.917135 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.694047632 podStartE2EDuration="3.9171153s" podCreationTimestamp="2025-11-21 13:56:35 +0000 UTC" firstStartedPulling="2025-11-21 13:56:36.398625448 +0000 UTC m=+1473.125040175" lastFinishedPulling="2025-11-21 13:56:37.621693116 +0000 UTC m=+1474.348107843" observedRunningTime="2025-11-21 13:56:38.907318757 +0000 UTC m=+1475.633733484" watchObservedRunningTime="2025-11-21 13:56:38.9171153 +0000 UTC m=+1475.643530027" Nov 21 13:56:39 crc kubenswrapper[4675]: I1121 13:56:39.868222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c9adb63e-74d2-48f6-b639-4b22def78e35","Type":"ContainerStarted","Data":"dd1143643b654a0966dc20dc1b01e7598cab3ec267b19397d2f8b9203a57c663"} Nov 21 13:56:42 crc kubenswrapper[4675]: I1121 13:56:42.704723 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 21 13:56:42 crc kubenswrapper[4675]: I1121 13:56:42.705954 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 21 13:56:42 crc kubenswrapper[4675]: I1121 13:56:42.779596 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 21 13:56:42 crc kubenswrapper[4675]: I1121 13:56:42.900194 4675 generic.go:334] "Generic (PLEG): container finished" podID="c9adb63e-74d2-48f6-b639-4b22def78e35" containerID="dd1143643b654a0966dc20dc1b01e7598cab3ec267b19397d2f8b9203a57c663" exitCode=0 Nov 21 13:56:42 crc kubenswrapper[4675]: I1121 13:56:42.900262 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"c9adb63e-74d2-48f6-b639-4b22def78e35","Type":"ContainerDied","Data":"dd1143643b654a0966dc20dc1b01e7598cab3ec267b19397d2f8b9203a57c663"} Nov 21 13:56:42 crc kubenswrapper[4675]: I1121 13:56:42.977085 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.559703 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-84b94b4484-zz6mx" podUID="97705629-fb36-433b-9788-38401a60643b" containerName="console" containerID="cri-o://c93d99a3f0108b969cd23a44c37fb14145d10fb4804f1382faf7e8ecf343b681" gracePeriod=15 Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.920636 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84b94b4484-zz6mx_97705629-fb36-433b-9788-38401a60643b/console/0.log" Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.920952 4675 generic.go:334] "Generic (PLEG): container finished" podID="97705629-fb36-433b-9788-38401a60643b" containerID="c93d99a3f0108b969cd23a44c37fb14145d10fb4804f1382faf7e8ecf343b681" exitCode=2 Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.921033 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84b94b4484-zz6mx" event={"ID":"97705629-fb36-433b-9788-38401a60643b","Type":"ContainerDied","Data":"c93d99a3f0108b969cd23a44c37fb14145d10fb4804f1382faf7e8ecf343b681"} Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.931237 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c9adb63e-74d2-48f6-b639-4b22def78e35","Type":"ContainerStarted","Data":"60180508ca058db142accd369363bdf78cf2b87baa4fd35985a9403b85d02703"} Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.934637 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerStarted","Data":"2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa"} Nov 21 13:56:43 crc kubenswrapper[4675]: I1121 13:56:43.957504 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371985.897293 podStartE2EDuration="50.957483361s" podCreationTimestamp="2025-11-21 13:55:53 +0000 UTC" firstStartedPulling="2025-11-21 13:56:08.751100623 +0000 UTC m=+1445.477515350" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:43.95343418 +0000 UTC m=+1480.679848917" watchObservedRunningTime="2025-11-21 13:56:43.957483361 +0000 UTC m=+1480.683898088" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.049422 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8dc-account-create-6fzjl"] Nov 21 13:56:44 crc kubenswrapper[4675]: E1121 13:56:44.049975 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="dnsmasq-dns" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.049991 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="dnsmasq-dns" Nov 21 13:56:44 crc kubenswrapper[4675]: E1121 13:56:44.050011 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="init" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.050019 4675 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="init" Nov 21 13:56:44 crc kubenswrapper[4675]: E1121 13:56:44.050037 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65890acc-153d-404d-b513-f5d0e00c43d1" containerName="init" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.050045 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="65890acc-153d-404d-b513-f5d0e00c43d1" containerName="init" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.050314 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="65890acc-153d-404d-b513-f5d0e00c43d1" containerName="init" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.050354 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ddf172-a0a8-40ea-a447-36295e8cfb52" containerName="dnsmasq-dns" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.051322 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.053225 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.060472 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8dc-account-create-6fzjl"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.093937 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84b94b4484-zz6mx_97705629-fb36-433b-9788-38401a60643b/console/0.log" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.094015 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.120966 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pstql"] Nov 21 13:56:44 crc kubenswrapper[4675]: E1121 13:56:44.121519 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97705629-fb36-433b-9788-38401a60643b" containerName="console" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.121543 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="97705629-fb36-433b-9788-38401a60643b" containerName="console" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.121777 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="97705629-fb36-433b-9788-38401a60643b" containerName="console" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.122585 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.140999 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pstql"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.166893 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-oauth-serving-cert\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.167867 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.168804 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw8zt\" (UniqueName: \"kubernetes.io/projected/97705629-fb36-433b-9788-38401a60643b-kube-api-access-hw8zt\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.168849 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-console-config\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.169572 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-console-config" (OuterVolumeSpecName: "console-config") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.170112 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-serving-cert\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.170557 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.170627 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-service-ca\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.170668 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-oauth-config\") pod \"97705629-fb36-433b-9788-38401a60643b\" (UID: \"97705629-fb36-433b-9788-38401a60643b\") " Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171114 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-service-ca" (OuterVolumeSpecName: "service-ca") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g57mk\" (UniqueName: \"kubernetes.io/projected/b882b154-acec-4468-84e0-bab76ab42c69-kube-api-access-g57mk\") pod \"keystone-d8dc-account-create-6fzjl\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171573 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b882b154-acec-4468-84e0-bab76ab42c69-operator-scripts\") pod \"keystone-d8dc-account-create-6fzjl\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171906 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171921 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171932 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.171940 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/97705629-fb36-433b-9788-38401a60643b-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.174348 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97705629-fb36-433b-9788-38401a60643b-kube-api-access-hw8zt" (OuterVolumeSpecName: "kube-api-access-hw8zt") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "kube-api-access-hw8zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.175335 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.175554 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "97705629-fb36-433b-9788-38401a60643b" (UID: "97705629-fb36-433b-9788-38401a60643b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.274027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-operator-scripts\") pod \"keystone-db-create-pstql\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.274829 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g57mk\" (UniqueName: \"kubernetes.io/projected/b882b154-acec-4468-84e0-bab76ab42c69-kube-api-access-g57mk\") pod \"keystone-d8dc-account-create-6fzjl\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.274938 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjlx\" (UniqueName: \"kubernetes.io/projected/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-kube-api-access-8jjlx\") pod \"keystone-db-create-pstql\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.275101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b882b154-acec-4468-84e0-bab76ab42c69-operator-scripts\") pod \"keystone-d8dc-account-create-6fzjl\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.275319 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw8zt\" (UniqueName: \"kubernetes.io/projected/97705629-fb36-433b-9788-38401a60643b-kube-api-access-hw8zt\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.275413 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.275491 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/97705629-fb36-433b-9788-38401a60643b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.276212 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b882b154-acec-4468-84e0-bab76ab42c69-operator-scripts\") pod \"keystone-d8dc-account-create-6fzjl\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.294879 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g57mk\" (UniqueName: \"kubernetes.io/projected/b882b154-acec-4468-84e0-bab76ab42c69-kube-api-access-g57mk\") pod \"keystone-d8dc-account-create-6fzjl\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.322169 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-568sp"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.323356 
4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.338009 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-568sp"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.377482 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjlx\" (UniqueName: \"kubernetes.io/projected/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-kube-api-access-8jjlx\") pod \"keystone-db-create-pstql\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.377614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-operator-scripts\") pod \"keystone-db-create-pstql\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.378619 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-operator-scripts\") pod \"keystone-db-create-pstql\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.401373 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.405333 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.405690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjlx\" (UniqueName: \"kubernetes.io/projected/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-kube-api-access-8jjlx\") pod \"keystone-db-create-pstql\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.435210 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c3b3-account-create-nm4vf"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.436906 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.437646 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pstql" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.438871 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.464389 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c3b3-account-create-nm4vf"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.481585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5xw\" (UniqueName: \"kubernetes.io/projected/636cfc25-1856-420d-a4e3-d906b44c3751-kube-api-access-wn5xw\") pod \"placement-db-create-568sp\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.481816 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636cfc25-1856-420d-a4e3-d906b44c3751-operator-scripts\") pod \"placement-db-create-568sp\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.574204 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.574656 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.586954 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-operator-scripts\") pod \"placement-c3b3-account-create-nm4vf\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.587162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5xw\" (UniqueName: \"kubernetes.io/projected/636cfc25-1856-420d-a4e3-d906b44c3751-kube-api-access-wn5xw\") pod \"placement-db-create-568sp\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.587194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bvr\" (UniqueName: \"kubernetes.io/projected/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-kube-api-access-v7bvr\") pod \"placement-c3b3-account-create-nm4vf\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.587281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636cfc25-1856-420d-a4e3-d906b44c3751-operator-scripts\") pod \"placement-db-create-568sp\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.588232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636cfc25-1856-420d-a4e3-d906b44c3751-operator-scripts\") pod \"placement-db-create-568sp\" (UID: 
\"636cfc25-1856-420d-a4e3-d906b44c3751\") " pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.614046 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5xw\" (UniqueName: \"kubernetes.io/projected/636cfc25-1856-420d-a4e3-d906b44c3751-kube-api-access-wn5xw\") pod \"placement-db-create-568sp\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.671816 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-568sp" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.677174 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8c4xz"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.679324 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.688631 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-operator-scripts\") pod \"placement-c3b3-account-create-nm4vf\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.689912 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-operator-scripts\") pod \"placement-c3b3-account-create-nm4vf\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.690110 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8c4xz"] Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.690152 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bvr\" (UniqueName: \"kubernetes.io/projected/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-kube-api-access-v7bvr\") pod \"placement-c3b3-account-create-nm4vf\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:44 crc kubenswrapper[4675]: I1121 13:56:44.712585 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bvr\" (UniqueName: \"kubernetes.io/projected/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-kube-api-access-v7bvr\") pod \"placement-c3b3-account-create-nm4vf\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.755459 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1495-account-create-nnsn8"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.758712 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.762158 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.764788 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1495-account-create-nnsn8"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.766416 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.795150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02708edd-65ef-4cc3-9d43-757a138f4028-operator-scripts\") pod \"glance-db-create-8c4xz\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.795254 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhjx\" (UniqueName: \"kubernetes.io/projected/02708edd-65ef-4cc3-9d43-757a138f4028-kube-api-access-hlhjx\") pod \"glance-db-create-8c4xz\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.897434 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e652d6b-549f-4f1f-a082-34fb47c60cc0-operator-scripts\") pod \"glance-1495-account-create-nnsn8\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.897481 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhjx\" (UniqueName: \"kubernetes.io/projected/02708edd-65ef-4cc3-9d43-757a138f4028-kube-api-access-hlhjx\") pod \"glance-db-create-8c4xz\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.897719 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62jn\" (UniqueName: \"kubernetes.io/projected/4e652d6b-549f-4f1f-a082-34fb47c60cc0-kube-api-access-p62jn\") pod \"glance-1495-account-create-nnsn8\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.897846 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02708edd-65ef-4cc3-9d43-757a138f4028-operator-scripts\") pod \"glance-db-create-8c4xz\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.898828 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02708edd-65ef-4cc3-9d43-757a138f4028-operator-scripts\") pod \"glance-db-create-8c4xz\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.918245 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hlhjx\" (UniqueName: \"kubernetes.io/projected/02708edd-65ef-4cc3-9d43-757a138f4028-kube-api-access-hlhjx\") pod \"glance-db-create-8c4xz\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.964917 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84b94b4484-zz6mx_97705629-fb36-433b-9788-38401a60643b/console/0.log" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.965148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84b94b4484-zz6mx" event={"ID":"97705629-fb36-433b-9788-38401a60643b","Type":"ContainerDied","Data":"1e2b5c68ffd203250c502944f66eca84125c4ae701c40d1cfe92fa72b9f3fb86"} Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.965218 4675 scope.go:117] "RemoveContainer" containerID="c93d99a3f0108b969cd23a44c37fb14145d10fb4804f1382faf7e8ecf343b681" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.965427 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84b94b4484-zz6mx" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.971936 4675 generic.go:334] "Generic (PLEG): container finished" podID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerID="a22a83806ff53fda2e092623ac08ebbffd804b4823e84aa138ab79f19378f685" exitCode=0 Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:44.972028 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d25ef58-c63a-4689-9ca0-3955b0a3d1df","Type":"ContainerDied","Data":"a22a83806ff53fda2e092623ac08ebbffd804b4823e84aa138ab79f19378f685"} Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.001693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e652d6b-549f-4f1f-a082-34fb47c60cc0-operator-scripts\") pod \"glance-1495-account-create-nnsn8\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.002179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p62jn\" (UniqueName: \"kubernetes.io/projected/4e652d6b-549f-4f1f-a082-34fb47c60cc0-kube-api-access-p62jn\") pod \"glance-1495-account-create-nnsn8\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.005749 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e652d6b-549f-4f1f-a082-34fb47c60cc0-operator-scripts\") pod \"glance-1495-account-create-nnsn8\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.025990 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84b94b4484-zz6mx"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.035295 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-84b94b4484-zz6mx"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.036432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p62jn\" (UniqueName: \"kubernetes.io/projected/4e652d6b-549f-4f1f-a082-34fb47c60cc0-kube-api-access-p62jn\") pod 
\"glance-1495-account-create-nnsn8\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.119372 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.131978 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.688040 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-trdrw" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.742750 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-8q49k"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.743016 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerName="dnsmasq-dns" containerID="cri-o://4ee62e7a8e017728171402101d21cb8b3e81ab600a6b19f54920bfaa5c892e55" gracePeriod=10 Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.982925 4675 generic.go:334] "Generic (PLEG): container finished" podID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerID="4ee62e7a8e017728171402101d21cb8b3e81ab600a6b19f54920bfaa5c892e55" exitCode=0 Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:45.982959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" event={"ID":"56cb8af6-449e-48fd-aa91-bb358634ff4a","Type":"ContainerDied","Data":"4ee62e7a8e017728171402101d21cb8b3e81ab600a6b19f54920bfaa5c892e55"} Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.447776 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-wdfqd"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.450391 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.465842 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-wdfqd"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.559880 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-config\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.559942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfflj\" (UniqueName: \"kubernetes.io/projected/7449ee56-df73-4460-bb87-337a1aab25d6-kube-api-access-xfflj\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.560026 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.560092 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.560127 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.561433 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-mmwhh"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.563929 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.572081 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-mmwhh"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.658136 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-1973-account-create-j7slr"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.659538 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662278 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caac7346-06cd-4263-b6aa-fa0ac48d8442-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-mmwhh\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662353 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-config\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662384 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gwj\" (UniqueName: \"kubernetes.io/projected/caac7346-06cd-4263-b6aa-fa0ac48d8442-kube-api-access-w5gwj\") pod \"mysqld-exporter-openstack-db-create-mmwhh\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfflj\" (UniqueName: \"kubernetes.io/projected/7449ee56-df73-4460-bb87-337a1aab25d6-kube-api-access-xfflj\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662467 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.662491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.663239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-config\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.663342 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: 
\"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.663994 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.664166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.667476 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.671146 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1973-account-create-j7slr"] Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.715993 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfflj\" (UniqueName: \"kubernetes.io/projected/7449ee56-df73-4460-bb87-337a1aab25d6-kube-api-access-xfflj\") pod \"dnsmasq-dns-b8fbc5445-wdfqd\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.766902 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caac7346-06cd-4263-b6aa-fa0ac48d8442-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-mmwhh\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.768435 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glzm\" (UniqueName: \"kubernetes.io/projected/74e3998f-742b-4adb-9f05-6cb0a77065ef-kube-api-access-9glzm\") pod \"mysqld-exporter-1973-account-create-j7slr\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.768501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e3998f-742b-4adb-9f05-6cb0a77065ef-operator-scripts\") pod \"mysqld-exporter-1973-account-create-j7slr\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.768600 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gwj\" (UniqueName: \"kubernetes.io/projected/caac7346-06cd-4263-b6aa-fa0ac48d8442-kube-api-access-w5gwj\") pod \"mysqld-exporter-openstack-db-create-mmwhh\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.768362 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/caac7346-06cd-4263-b6aa-fa0ac48d8442-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-mmwhh\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.817521 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gwj\" (UniqueName: \"kubernetes.io/projected/caac7346-06cd-4263-b6aa-fa0ac48d8442-kube-api-access-w5gwj\") pod \"mysqld-exporter-openstack-db-create-mmwhh\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.828720 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.872292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e3998f-742b-4adb-9f05-6cb0a77065ef-operator-scripts\") pod \"mysqld-exporter-1973-account-create-j7slr\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.872583 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glzm\" (UniqueName: \"kubernetes.io/projected/74e3998f-742b-4adb-9f05-6cb0a77065ef-kube-api-access-9glzm\") pod \"mysqld-exporter-1973-account-create-j7slr\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.881352 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97705629-fb36-433b-9788-38401a60643b" path="/var/lib/kubelet/pods/97705629-fb36-433b-9788-38401a60643b/volumes" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.883538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e3998f-742b-4adb-9f05-6cb0a77065ef-operator-scripts\") pod \"mysqld-exporter-1973-account-create-j7slr\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.894718 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glzm\" (UniqueName: \"kubernetes.io/projected/74e3998f-742b-4adb-9f05-6cb0a77065ef-kube-api-access-9glzm\") pod \"mysqld-exporter-1973-account-create-j7slr\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.916307 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:46 crc kubenswrapper[4675]: I1121 13:56:46.985529 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.056700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8dc-account-create-6fzjl"] Nov 21 13:56:47 crc kubenswrapper[4675]: W1121 13:56:47.076920 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb882b154_acec_4468_84e0_bab76ab42c69.slice/crio-7b15d651ef4464f17443be7382846cd81acba6f640fe5311e383ebdd5b7bc227 WatchSource:0}: Error finding container 7b15d651ef4464f17443be7382846cd81acba6f640fe5311e383ebdd5b7bc227: Status 404 returned error can't find the container with id 7b15d651ef4464f17443be7382846cd81acba6f640fe5311e383ebdd5b7bc227 Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.082190 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d25ef58-c63a-4689-9ca0-3955b0a3d1df","Type":"ContainerStarted","Data":"e716dc77663bdc3d446a01e4770fdedae51229e4b5b9756e41a1f37f3caefc41"} Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.083176 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.112259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerStarted","Data":"df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607"} Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.119986 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.497163804 podStartE2EDuration="58.119935228s" podCreationTimestamp="2025-11-21 13:55:49 +0000 UTC" firstStartedPulling="2025-11-21 13:56:00.502316545 +0000 UTC m=+1437.228731272" lastFinishedPulling="2025-11-21 13:56:10.125087969 +0000 UTC m=+1446.851502696" observedRunningTime="2025-11-21 13:56:47.114315448 +0000 UTC m=+1483.840730185" watchObservedRunningTime="2025-11-21 13:56:47.119935228 +0000 UTC m=+1483.846349965" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.163866 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.281019 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj66l\" (UniqueName: \"kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l\") pod \"56cb8af6-449e-48fd-aa91-bb358634ff4a\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.281419 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-config\") pod \"56cb8af6-449e-48fd-aa91-bb358634ff4a\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.281715 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-dns-svc\") pod \"56cb8af6-449e-48fd-aa91-bb358634ff4a\" (UID: \"56cb8af6-449e-48fd-aa91-bb358634ff4a\") " Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.323879 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l" (OuterVolumeSpecName: "kube-api-access-hj66l") pod "56cb8af6-449e-48fd-aa91-bb358634ff4a" (UID: "56cb8af6-449e-48fd-aa91-bb358634ff4a"). InnerVolumeSpecName "kube-api-access-hj66l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.386560 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj66l\" (UniqueName: \"kubernetes.io/projected/56cb8af6-449e-48fd-aa91-bb358634ff4a-kube-api-access-hj66l\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.417950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56cb8af6-449e-48fd-aa91-bb358634ff4a" (UID: "56cb8af6-449e-48fd-aa91-bb358634ff4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.461567 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-config" (OuterVolumeSpecName: "config") pod "56cb8af6-449e-48fd-aa91-bb358634ff4a" (UID: "56cb8af6-449e-48fd-aa91-bb358634ff4a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.488638 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.488671 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56cb8af6-449e-48fd-aa91-bb358634ff4a-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.600767 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pstql"] Nov 21 13:56:47 crc kubenswrapper[4675]: W1121 13:56:47.616205 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e23fca_5df9_4e92_a0c8_969fc4e1cca2.slice/crio-eab9abed578930930bf9f6d4bcb475951eee834d9c9ab4fcef0f7e87ad07b51c WatchSource:0}: Error finding container eab9abed578930930bf9f6d4bcb475951eee834d9c9ab4fcef0f7e87ad07b51c: Status 404 returned error can't find the container with id eab9abed578930930bf9f6d4bcb475951eee834d9c9ab4fcef0f7e87ad07b51c Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.632916 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 21 13:56:47 crc kubenswrapper[4675]: E1121 13:56:47.633332 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerName="dnsmasq-dns" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.633344 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerName="dnsmasq-dns" Nov 21 13:56:47 crc kubenswrapper[4675]: E1121 13:56:47.633372 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerName="init" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.633378 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerName="init" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.633569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" containerName="dnsmasq-dns" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.659127 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.663971 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.664113 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.667362 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z6svd" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.682287 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.688427 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.695180 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.695220 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/29cc3528-47d5-4479-85fc-37f8e53f1caf-cache\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.695245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nb9j\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-kube-api-access-8nb9j\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.695335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.695349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/29cc3528-47d5-4479-85fc-37f8e53f1caf-lock\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.796376 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.796739 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/29cc3528-47d5-4479-85fc-37f8e53f1caf-lock\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.796873 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.796903 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/29cc3528-47d5-4479-85fc-37f8e53f1caf-cache\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.796927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nb9j\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-kube-api-access-8nb9j\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: E1121 13:56:47.796696 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:56:47 crc kubenswrapper[4675]: E1121 13:56:47.797306 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:56:47 crc kubenswrapper[4675]: E1121 13:56:47.797351 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift podName:29cc3528-47d5-4479-85fc-37f8e53f1caf nodeName:}" failed. No retries permitted until 2025-11-21 13:56:48.297334563 +0000 UTC m=+1485.023749290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift") pod "swift-storage-0" (UID: "29cc3528-47d5-4479-85fc-37f8e53f1caf") : configmap "swift-ring-files" not found Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.798172 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/29cc3528-47d5-4479-85fc-37f8e53f1caf-cache\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.798266 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.798457 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/29cc3528-47d5-4479-85fc-37f8e53f1caf-lock\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.822725 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nb9j\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-kube-api-access-8nb9j\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.855184 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c3b3-account-create-nm4vf"] Nov 21 13:56:47 crc 
kubenswrapper[4675]: W1121 13:56:47.868576 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e652d6b_549f_4f1f_a082_34fb47c60cc0.slice/crio-63eebfc4fe71e3f58cc048d4e3009dffca620d186a787acefdaf03ba5766aeb3 WatchSource:0}: Error finding container 63eebfc4fe71e3f58cc048d4e3009dffca620d186a787acefdaf03ba5766aeb3: Status 404 returned error can't find the container with id 63eebfc4fe71e3f58cc048d4e3009dffca620d186a787acefdaf03ba5766aeb3 Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.870880 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.873714 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1495-account-create-nnsn8"] Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.895466 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-568sp"] Nov 21 13:56:47 crc kubenswrapper[4675]: I1121 13:56:47.909648 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8c4xz"] Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.069821 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-wdfqd"] Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.088823 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-mmwhh"] Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.183118 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-568sp" event={"ID":"636cfc25-1856-420d-a4e3-d906b44c3751","Type":"ContainerStarted","Data":"a92af05efa9d3d676bdd5d39d094aafe7da5ae1392606941a4abc77c41827e8c"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.193348 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1495-account-create-nnsn8" event={"ID":"4e652d6b-549f-4f1f-a082-34fb47c60cc0","Type":"ContainerStarted","Data":"63eebfc4fe71e3f58cc048d4e3009dffca620d186a787acefdaf03ba5766aeb3"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.199027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" event={"ID":"caac7346-06cd-4263-b6aa-fa0ac48d8442","Type":"ContainerStarted","Data":"f818d3166a27e09a109d4ae2bbc4df6be7f2051082e8fc0ae3325a77154ff1fb"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.199630 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1973-account-create-j7slr"] Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.201665 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8c4xz" event={"ID":"02708edd-65ef-4cc3-9d43-757a138f4028","Type":"ContainerStarted","Data":"a727492eaab131eff4649e44c120824f54829c8f72f17af3bf924592afab866a"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.218669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" event={"ID":"56cb8af6-449e-48fd-aa91-bb358634ff4a","Type":"ContainerDied","Data":"58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.218715 4675 scope.go:117] "RemoveContainer" 
containerID="4ee62e7a8e017728171402101d21cb8b3e81ab600a6b19f54920bfaa5c892e55" Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.218854 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-8q49k" Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.223534 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pstql" event={"ID":"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2","Type":"ContainerStarted","Data":"fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.223586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pstql" event={"ID":"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2","Type":"ContainerStarted","Data":"eab9abed578930930bf9f6d4bcb475951eee834d9c9ab4fcef0f7e87ad07b51c"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.227949 4675 generic.go:334] "Generic (PLEG): container finished" podID="b882b154-acec-4468-84e0-bab76ab42c69" containerID="673412236fa63be3374f3656756261269023dd71be2cb8826ab22f4b64c72699" exitCode=0 Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.228000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8dc-account-create-6fzjl" event={"ID":"b882b154-acec-4468-84e0-bab76ab42c69","Type":"ContainerDied","Data":"673412236fa63be3374f3656756261269023dd71be2cb8826ab22f4b64c72699"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.228049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8dc-account-create-6fzjl" event={"ID":"b882b154-acec-4468-84e0-bab76ab42c69","Type":"ContainerStarted","Data":"7b15d651ef4464f17443be7382846cd81acba6f640fe5311e383ebdd5b7bc227"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.230920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" event={"ID":"7449ee56-df73-4460-bb87-337a1aab25d6","Type":"ContainerStarted","Data":"f0c65d21369ddc68768f6958025bc0df2d9439aa93d6d9be50926ece7ad0f091"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.234622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3b3-account-create-nm4vf" event={"ID":"e6d3dce6-9b93-48fb-b51e-203e3883c8cd","Type":"ContainerStarted","Data":"70a2f91fd6563091c267eb3cb6784f8f03c8667a81e89a5aad3c4ad1d0f8c6a1"} Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.311629 4675 scope.go:117] "RemoveContainer" containerID="aeddce8a8bec368d6c12559dcaf8c93d8c16445e0652d421b9ec3a09170b2c20" Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.315134 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:48 crc kubenswrapper[4675]: E1121 13:56:48.315332 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:56:48 crc kubenswrapper[4675]: E1121 13:56:48.315365 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:56:48 crc kubenswrapper[4675]: E1121 13:56:48.315414 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift 
podName:29cc3528-47d5-4479-85fc-37f8e53f1caf nodeName:}" failed. No retries permitted until 2025-11-21 13:56:49.31539882 +0000 UTC m=+1486.041813537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift") pod "swift-storage-0" (UID: "29cc3528-47d5-4479-85fc-37f8e53f1caf") : configmap "swift-ring-files" not found Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.334111 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-8q49k"] Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.342755 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-8q49k"] Nov 21 13:56:48 crc kubenswrapper[4675]: E1121 13:56:48.369833 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e23fca_5df9_4e92_a0c8_969fc4e1cca2.slice/crio-fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e23fca_5df9_4e92_a0c8_969fc4e1cca2.slice/crio-conmon-fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56cb8af6_449e_48fd_aa91_bb358634ff4a.slice/crio-58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc\": RecentStats: unable to find data in memory cache]" Nov 21 13:56:48 crc kubenswrapper[4675]: E1121 13:56:48.373370 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56cb8af6_449e_48fd_aa91_bb358634ff4a.slice/crio-58100948f2bd107e1b03acb3aa535ac8f453ab98e7a7b730da66c62bfeedd6dc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e23fca_5df9_4e92_a0c8_969fc4e1cca2.slice/crio-conmon-fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e23fca_5df9_4e92_a0c8_969fc4e1cca2.slice/crio-fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.734270 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.866967 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56cb8af6-449e-48fd-aa91-bb358634ff4a" path="/var/lib/kubelet/pods/56cb8af6-449e-48fd-aa91-bb358634ff4a/volumes" Nov 21 13:56:48 crc kubenswrapper[4675]: I1121 13:56:48.868130 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.250355 4675 generic.go:334] "Generic (PLEG): container finished" podID="a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" containerID="fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.250416 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-pstql" event={"ID":"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2","Type":"ContainerDied","Data":"fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.252623 4675 generic.go:334] "Generic (PLEG): container finished" podID="4e652d6b-549f-4f1f-a082-34fb47c60cc0" containerID="ee09c8fa8357a2a90bc002b059de1a00d53db1e4bcab58e1f1f609b7f5eb901d" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.252658 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1495-account-create-nnsn8" event={"ID":"4e652d6b-549f-4f1f-a082-34fb47c60cc0","Type":"ContainerDied","Data":"ee09c8fa8357a2a90bc002b059de1a00d53db1e4bcab58e1f1f609b7f5eb901d"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.255588 4675 generic.go:334] "Generic (PLEG): container finished" podID="7449ee56-df73-4460-bb87-337a1aab25d6" containerID="64b3497b7f197b373fa178772c998a38d6cbc6047e0ce5f2862916bdfa42378d" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.255700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" event={"ID":"7449ee56-df73-4460-bb87-337a1aab25d6","Type":"ContainerDied","Data":"64b3497b7f197b373fa178772c998a38d6cbc6047e0ce5f2862916bdfa42378d"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.259219 4675 generic.go:334] "Generic (PLEG): container finished" podID="e6d3dce6-9b93-48fb-b51e-203e3883c8cd" containerID="5f369fc34cac5033406a28cb115428cf2f11d68fc01f3f2f46434705732ee7c5" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.259297 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3b3-account-create-nm4vf" event={"ID":"e6d3dce6-9b93-48fb-b51e-203e3883c8cd","Type":"ContainerDied","Data":"5f369fc34cac5033406a28cb115428cf2f11d68fc01f3f2f46434705732ee7c5"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.262592 4675 generic.go:334] "Generic (PLEG): container finished" podID="636cfc25-1856-420d-a4e3-d906b44c3751" containerID="d27c8cd239ee49134d31c0a41e767f7ed7f68ecba34264b13eb5af1bc841b9e8" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.262667 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-568sp" event={"ID":"636cfc25-1856-420d-a4e3-d906b44c3751","Type":"ContainerDied","Data":"d27c8cd239ee49134d31c0a41e767f7ed7f68ecba34264b13eb5af1bc841b9e8"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.266986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1973-account-create-j7slr" event={"ID":"74e3998f-742b-4adb-9f05-6cb0a77065ef","Type":"ContainerStarted","Data":"4dad58cf810ce27540070cb3bc34a1d2d7bd84c47a22fc747320b0e579551c52"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.267219 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1973-account-create-j7slr" event={"ID":"74e3998f-742b-4adb-9f05-6cb0a77065ef","Type":"ContainerStarted","Data":"ec390a6f50b8ba92604a80563a2f94813bad8377d36d7a3bdd59287f67659a37"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.270815 4675 generic.go:334] "Generic (PLEG): container finished" podID="caac7346-06cd-4263-b6aa-fa0ac48d8442" containerID="40d9f14d72ca63a15652f3e8ccd3cd91a5bae8830ac2467973b19708838d70d3" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.271295 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" event={"ID":"caac7346-06cd-4263-b6aa-fa0ac48d8442","Type":"ContainerDied","Data":"40d9f14d72ca63a15652f3e8ccd3cd91a5bae8830ac2467973b19708838d70d3"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.283825 4675 generic.go:334] "Generic (PLEG): container finished" podID="02708edd-65ef-4cc3-9d43-757a138f4028" containerID="00049ab79945c27df254ed101e1f3f8eda94ee825df3c55fec49df16a0709369" exitCode=0 Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.284032 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8c4xz" event={"ID":"02708edd-65ef-4cc3-9d43-757a138f4028","Type":"ContainerDied","Data":"00049ab79945c27df254ed101e1f3f8eda94ee825df3c55fec49df16a0709369"} Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.311259 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-1973-account-create-j7slr" podStartSLOduration=3.311234325 podStartE2EDuration="3.311234325s" podCreationTimestamp="2025-11-21 13:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:49.302406335 +0000 UTC m=+1486.028821062" watchObservedRunningTime="2025-11-21 13:56:49.311234325 +0000 UTC m=+1486.037649052" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.350241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:49 crc kubenswrapper[4675]: E1121 13:56:49.351227 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:56:49 crc kubenswrapper[4675]: E1121 13:56:49.351252 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:56:49 crc kubenswrapper[4675]: E1121 13:56:49.351299 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift podName:29cc3528-47d5-4479-85fc-37f8e53f1caf nodeName:}" failed. No retries permitted until 2025-11-21 13:56:51.351280989 +0000 UTC m=+1488.077695716 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift") pod "swift-storage-0" (UID: "29cc3528-47d5-4479-85fc-37f8e53f1caf") : configmap "swift-ring-files" not found Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.796906 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pstql" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.807406 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.871046 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-operator-scripts\") pod \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.871283 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b882b154-acec-4468-84e0-bab76ab42c69-operator-scripts\") pod \"b882b154-acec-4468-84e0-bab76ab42c69\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.871351 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g57mk\" (UniqueName: \"kubernetes.io/projected/b882b154-acec-4468-84e0-bab76ab42c69-kube-api-access-g57mk\") pod \"b882b154-acec-4468-84e0-bab76ab42c69\" (UID: \"b882b154-acec-4468-84e0-bab76ab42c69\") " Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.871544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" (UID: "a2e23fca-5df9-4e92-a0c8-969fc4e1cca2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.871441 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjlx\" (UniqueName: \"kubernetes.io/projected/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-kube-api-access-8jjlx\") pod \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\" (UID: \"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2\") " Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.871893 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b882b154-acec-4468-84e0-bab76ab42c69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b882b154-acec-4468-84e0-bab76ab42c69" (UID: "b882b154-acec-4468-84e0-bab76ab42c69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.872448 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.872469 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b882b154-acec-4468-84e0-bab76ab42c69-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.882120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-kube-api-access-8jjlx" (OuterVolumeSpecName: "kube-api-access-8jjlx") pod "a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" (UID: "a2e23fca-5df9-4e92-a0c8-969fc4e1cca2"). InnerVolumeSpecName "kube-api-access-8jjlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.882177 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b882b154-acec-4468-84e0-bab76ab42c69-kube-api-access-g57mk" (OuterVolumeSpecName: "kube-api-access-g57mk") pod "b882b154-acec-4468-84e0-bab76ab42c69" (UID: "b882b154-acec-4468-84e0-bab76ab42c69"). InnerVolumeSpecName "kube-api-access-g57mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.974168 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g57mk\" (UniqueName: \"kubernetes.io/projected/b882b154-acec-4468-84e0-bab76ab42c69-kube-api-access-g57mk\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:49 crc kubenswrapper[4675]: I1121 13:56:49.974205 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjlx\" (UniqueName: \"kubernetes.io/projected/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2-kube-api-access-8jjlx\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.294577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pstql" event={"ID":"a2e23fca-5df9-4e92-a0c8-969fc4e1cca2","Type":"ContainerDied","Data":"eab9abed578930930bf9f6d4bcb475951eee834d9c9ab4fcef0f7e87ad07b51c"} Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.294878 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab9abed578930930bf9f6d4bcb475951eee834d9c9ab4fcef0f7e87ad07b51c" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.294777 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pstql" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.296254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8dc-account-create-6fzjl" event={"ID":"b882b154-acec-4468-84e0-bab76ab42c69","Type":"ContainerDied","Data":"7b15d651ef4464f17443be7382846cd81acba6f640fe5311e383ebdd5b7bc227"} Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.296282 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b15d651ef4464f17443be7382846cd81acba6f640fe5311e383ebdd5b7bc227" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.296330 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8dc-account-create-6fzjl" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.299599 4675 generic.go:334] "Generic (PLEG): container finished" podID="74e3998f-742b-4adb-9f05-6cb0a77065ef" containerID="4dad58cf810ce27540070cb3bc34a1d2d7bd84c47a22fc747320b0e579551c52" exitCode=0 Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.299637 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1973-account-create-j7slr" event={"ID":"74e3998f-742b-4adb-9f05-6cb0a77065ef","Type":"ContainerDied","Data":"4dad58cf810ce27540070cb3bc34a1d2d7bd84c47a22fc747320b0e579551c52"} Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.306032 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" event={"ID":"7449ee56-df73-4460-bb87-337a1aab25d6","Type":"ContainerStarted","Data":"44046f6071f4cbf34e069b862d557701102073dbd31b1ebabc2bf6d669ea1bf1"} Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.306401 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.359637 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" podStartSLOduration=4.359618633 podStartE2EDuration="4.359618633s" podCreationTimestamp="2025-11-21 13:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:50.350055716 +0000 UTC m=+1487.076470443" watchObservedRunningTime="2025-11-21 13:56:50.359618633 +0000 UTC m=+1487.086033360" Nov 21 13:56:50 crc kubenswrapper[4675]: I1121 13:56:50.800121 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.407952 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:51 crc kubenswrapper[4675]: E1121 13:56:51.408154 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:56:51 crc kubenswrapper[4675]: E1121 13:56:51.408289 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:56:51 crc kubenswrapper[4675]: E1121 13:56:51.408346 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift podName:29cc3528-47d5-4479-85fc-37f8e53f1caf nodeName:}" failed. No retries permitted until 2025-11-21 13:56:55.408328101 +0000 UTC m=+1492.134742828 (durationBeforeRetry 4s). 
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.607949 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6s7pj"]
Nov 21 13:56:51 crc kubenswrapper[4675]: E1121 13:56:51.608897 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b882b154-acec-4468-84e0-bab76ab42c69" containerName="mariadb-account-create"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.608917 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b882b154-acec-4468-84e0-bab76ab42c69" containerName="mariadb-account-create"
Nov 21 13:56:51 crc kubenswrapper[4675]: E1121 13:56:51.608969 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" containerName="mariadb-database-create"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.608979 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" containerName="mariadb-database-create"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.609254 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b882b154-acec-4468-84e0-bab76ab42c69" containerName="mariadb-account-create"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.609293 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" containerName="mariadb-database-create"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.610259 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.612840 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.613114 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.613228 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.620153 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6s7pj"]
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.714643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dts7s\" (UniqueName: \"kubernetes.io/projected/e01d9dde-a9f3-4efc-8997-bf3914cffde9-kube-api-access-dts7s\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.714720 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-dispersionconf\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.714776 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-swiftconf\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.714839 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e01d9dde-a9f3-4efc-8997-bf3914cffde9-etc-swift\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.714914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-combined-ca-bundle\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.714956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-scripts\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.715025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-ring-data-devices\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.816777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e01d9dde-a9f3-4efc-8997-bf3914cffde9-etc-swift\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.816868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-combined-ca-bundle\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.816888 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-scripts\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.816941 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-ring-data-devices\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.817013 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dts7s\" (UniqueName: \"kubernetes.io/projected/e01d9dde-a9f3-4efc-8997-bf3914cffde9-kube-api-access-dts7s\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.817045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-dispersionconf\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.817129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-swiftconf\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.817936 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e01d9dde-a9f3-4efc-8997-bf3914cffde9-etc-swift\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.818041 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-ring-data-devices\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.818407 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-scripts\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.823432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-swiftconf\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.823907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-combined-ca-bundle\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.831481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-dispersionconf\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.834324 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dts7s\" (UniqueName: \"kubernetes.io/projected/e01d9dde-a9f3-4efc-8997-bf3914cffde9-kube-api-access-dts7s\") pod \"swift-ring-rebalance-6s7pj\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") " pod="openstack/swift-ring-rebalance-6s7pj"
pod="openstack/swift-ring-rebalance-6s7pj" Nov 21 13:56:51 crc kubenswrapper[4675]: I1121 13:56:51.936785 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6s7pj" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.346958 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3b3-account-create-nm4vf" event={"ID":"e6d3dce6-9b93-48fb-b51e-203e3883c8cd","Type":"ContainerDied","Data":"70a2f91fd6563091c267eb3cb6784f8f03c8667a81e89a5aad3c4ad1d0f8c6a1"} Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.347370 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a2f91fd6563091c267eb3cb6784f8f03c8667a81e89a5aad3c4ad1d0f8c6a1" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.349806 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8c4xz" event={"ID":"02708edd-65ef-4cc3-9d43-757a138f4028","Type":"ContainerDied","Data":"a727492eaab131eff4649e44c120824f54829c8f72f17af3bf924592afab866a"} Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.349852 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a727492eaab131eff4649e44c120824f54829c8f72f17af3bf924592afab866a" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.351961 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-568sp" event={"ID":"636cfc25-1856-420d-a4e3-d906b44c3751","Type":"ContainerDied","Data":"a92af05efa9d3d676bdd5d39d094aafe7da5ae1392606941a4abc77c41827e8c"} Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.352060 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92af05efa9d3d676bdd5d39d094aafe7da5ae1392606941a4abc77c41827e8c" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.355356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1973-account-create-j7slr" event={"ID":"74e3998f-742b-4adb-9f05-6cb0a77065ef","Type":"ContainerDied","Data":"ec390a6f50b8ba92604a80563a2f94813bad8377d36d7a3bdd59287f67659a37"} Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.355605 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec390a6f50b8ba92604a80563a2f94813bad8377d36d7a3bdd59287f67659a37" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.357366 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1495-account-create-nnsn8" event={"ID":"4e652d6b-549f-4f1f-a082-34fb47c60cc0","Type":"ContainerDied","Data":"63eebfc4fe71e3f58cc048d4e3009dffca620d186a787acefdaf03ba5766aeb3"} Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.357433 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63eebfc4fe71e3f58cc048d4e3009dffca620d186a787acefdaf03ba5766aeb3" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.358844 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" event={"ID":"caac7346-06cd-4263-b6aa-fa0ac48d8442","Type":"ContainerDied","Data":"f818d3166a27e09a109d4ae2bbc4df6be7f2051082e8fc0ae3325a77154ff1fb"} Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.358881 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f818d3166a27e09a109d4ae2bbc4df6be7f2051082e8fc0ae3325a77154ff1fb" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.474594 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.493401 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.502466 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.506287 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.522333 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-568sp" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.536857 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581669 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02708edd-65ef-4cc3-9d43-757a138f4028-operator-scripts\") pod \"02708edd-65ef-4cc3-9d43-757a138f4028\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581708 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e3998f-742b-4adb-9f05-6cb0a77065ef-operator-scripts\") pod \"74e3998f-742b-4adb-9f05-6cb0a77065ef\" (UID: \"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581780 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5gwj\" (UniqueName: \"kubernetes.io/projected/caac7346-06cd-4263-b6aa-fa0ac48d8442-kube-api-access-w5gwj\") pod \"caac7346-06cd-4263-b6aa-fa0ac48d8442\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581802 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e652d6b-549f-4f1f-a082-34fb47c60cc0-operator-scripts\") pod \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581829 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlhjx\" (UniqueName: \"kubernetes.io/projected/02708edd-65ef-4cc3-9d43-757a138f4028-kube-api-access-hlhjx\") pod \"02708edd-65ef-4cc3-9d43-757a138f4028\" (UID: \"02708edd-65ef-4cc3-9d43-757a138f4028\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581851 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636cfc25-1856-420d-a4e3-d906b44c3751-operator-scripts\") pod \"636cfc25-1856-420d-a4e3-d906b44c3751\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581882 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glzm\" (UniqueName: \"kubernetes.io/projected/74e3998f-742b-4adb-9f05-6cb0a77065ef-kube-api-access-9glzm\") pod \"74e3998f-742b-4adb-9f05-6cb0a77065ef\" (UID: 
\"74e3998f-742b-4adb-9f05-6cb0a77065ef\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581917 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caac7346-06cd-4263-b6aa-fa0ac48d8442-operator-scripts\") pod \"caac7346-06cd-4263-b6aa-fa0ac48d8442\" (UID: \"caac7346-06cd-4263-b6aa-fa0ac48d8442\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.581954 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p62jn\" (UniqueName: \"kubernetes.io/projected/4e652d6b-549f-4f1f-a082-34fb47c60cc0-kube-api-access-p62jn\") pod \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\" (UID: \"4e652d6b-549f-4f1f-a082-34fb47c60cc0\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.582005 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn5xw\" (UniqueName: \"kubernetes.io/projected/636cfc25-1856-420d-a4e3-d906b44c3751-kube-api-access-wn5xw\") pod \"636cfc25-1856-420d-a4e3-d906b44c3751\" (UID: \"636cfc25-1856-420d-a4e3-d906b44c3751\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.582691 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e3998f-742b-4adb-9f05-6cb0a77065ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74e3998f-742b-4adb-9f05-6cb0a77065ef" (UID: "74e3998f-742b-4adb-9f05-6cb0a77065ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.582709 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636cfc25-1856-420d-a4e3-d906b44c3751-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "636cfc25-1856-420d-a4e3-d906b44c3751" (UID: "636cfc25-1856-420d-a4e3-d906b44c3751"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.582687 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02708edd-65ef-4cc3-9d43-757a138f4028-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02708edd-65ef-4cc3-9d43-757a138f4028" (UID: "02708edd-65ef-4cc3-9d43-757a138f4028"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.584462 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e652d6b-549f-4f1f-a082-34fb47c60cc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e652d6b-549f-4f1f-a082-34fb47c60cc0" (UID: "4e652d6b-549f-4f1f-a082-34fb47c60cc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.584883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caac7346-06cd-4263-b6aa-fa0ac48d8442-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caac7346-06cd-4263-b6aa-fa0ac48d8442" (UID: "caac7346-06cd-4263-b6aa-fa0ac48d8442"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.589117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caac7346-06cd-4263-b6aa-fa0ac48d8442-kube-api-access-w5gwj" (OuterVolumeSpecName: "kube-api-access-w5gwj") pod "caac7346-06cd-4263-b6aa-fa0ac48d8442" (UID: "caac7346-06cd-4263-b6aa-fa0ac48d8442"). InnerVolumeSpecName "kube-api-access-w5gwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.589794 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02708edd-65ef-4cc3-9d43-757a138f4028-kube-api-access-hlhjx" (OuterVolumeSpecName: "kube-api-access-hlhjx") pod "02708edd-65ef-4cc3-9d43-757a138f4028" (UID: "02708edd-65ef-4cc3-9d43-757a138f4028"). InnerVolumeSpecName "kube-api-access-hlhjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.589925 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636cfc25-1856-420d-a4e3-d906b44c3751-kube-api-access-wn5xw" (OuterVolumeSpecName: "kube-api-access-wn5xw") pod "636cfc25-1856-420d-a4e3-d906b44c3751" (UID: "636cfc25-1856-420d-a4e3-d906b44c3751"). InnerVolumeSpecName "kube-api-access-wn5xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.592087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e652d6b-549f-4f1f-a082-34fb47c60cc0-kube-api-access-p62jn" (OuterVolumeSpecName: "kube-api-access-p62jn") pod "4e652d6b-549f-4f1f-a082-34fb47c60cc0" (UID: "4e652d6b-549f-4f1f-a082-34fb47c60cc0"). InnerVolumeSpecName "kube-api-access-p62jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.621625 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e3998f-742b-4adb-9f05-6cb0a77065ef-kube-api-access-9glzm" (OuterVolumeSpecName: "kube-api-access-9glzm") pod "74e3998f-742b-4adb-9f05-6cb0a77065ef" (UID: "74e3998f-742b-4adb-9f05-6cb0a77065ef"). InnerVolumeSpecName "kube-api-access-9glzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-operator-scripts\") pod \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684186 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7bvr\" (UniqueName: \"kubernetes.io/projected/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-kube-api-access-v7bvr\") pod \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\" (UID: \"e6d3dce6-9b93-48fb-b51e-203e3883c8cd\") " Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684553 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6d3dce6-9b93-48fb-b51e-203e3883c8cd" (UID: "e6d3dce6-9b93-48fb-b51e-203e3883c8cd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684801 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caac7346-06cd-4263-b6aa-fa0ac48d8442-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684827 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p62jn\" (UniqueName: \"kubernetes.io/projected/4e652d6b-549f-4f1f-a082-34fb47c60cc0-kube-api-access-p62jn\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684843 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn5xw\" (UniqueName: \"kubernetes.io/projected/636cfc25-1856-420d-a4e3-d906b44c3751-kube-api-access-wn5xw\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684856 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684867 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02708edd-65ef-4cc3-9d43-757a138f4028-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684879 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e3998f-742b-4adb-9f05-6cb0a77065ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684890 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5gwj\" (UniqueName: \"kubernetes.io/projected/caac7346-06cd-4263-b6aa-fa0ac48d8442-kube-api-access-w5gwj\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684902 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e652d6b-549f-4f1f-a082-34fb47c60cc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684913 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlhjx\" (UniqueName: \"kubernetes.io/projected/02708edd-65ef-4cc3-9d43-757a138f4028-kube-api-access-hlhjx\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684923 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/636cfc25-1856-420d-a4e3-d906b44c3751-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.684933 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9glzm\" (UniqueName: \"kubernetes.io/projected/74e3998f-742b-4adb-9f05-6cb0a77065ef-kube-api-access-9glzm\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.687365 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-kube-api-access-v7bvr" (OuterVolumeSpecName: "kube-api-access-v7bvr") pod "e6d3dce6-9b93-48fb-b51e-203e3883c8cd" (UID: "e6d3dce6-9b93-48fb-b51e-203e3883c8cd"). InnerVolumeSpecName "kube-api-access-v7bvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.758128 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6s7pj"] Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.786807 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7bvr\" (UniqueName: \"kubernetes.io/projected/e6d3dce6-9b93-48fb-b51e-203e3883c8cd-kube-api-access-v7bvr\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4675]: W1121 13:56:54.885801 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01d9dde_a9f3_4efc_8997_bf3914cffde9.slice/crio-f5527c35c1c5436bd6bcc2f21aa3e6a12a2e59146239f1cbb537c2f1fd88444b WatchSource:0}: Error finding container f5527c35c1c5436bd6bcc2f21aa3e6a12a2e59146239f1cbb537c2f1fd88444b: Status 404 returned error can't find the container with id f5527c35c1c5436bd6bcc2f21aa3e6a12a2e59146239f1cbb537c2f1fd88444b Nov 21 13:56:54 crc kubenswrapper[4675]: I1121 13:56:54.892464 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.368347 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6s7pj" event={"ID":"e01d9dde-a9f3-4efc-8997-bf3914cffde9","Type":"ContainerStarted","Data":"f5527c35c1c5436bd6bcc2f21aa3e6a12a2e59146239f1cbb537c2f1fd88444b"} Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371528 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerStarted","Data":"099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c"} Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371551 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8c4xz" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371599 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1495-account-create-nnsn8" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371618 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-568sp" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371575 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3b3-account-create-nm4vf" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371696 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-mmwhh" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.371864 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1973-account-create-j7slr" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.402268 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.563956208 podStartE2EDuration="59.4022509s" podCreationTimestamp="2025-11-21 13:55:56 +0000 UTC" firstStartedPulling="2025-11-21 13:56:11.060359159 +0000 UTC m=+1447.786773886" lastFinishedPulling="2025-11-21 13:56:54.898653851 +0000 UTC m=+1491.625068578" observedRunningTime="2025-11-21 13:56:55.398446605 +0000 UTC m=+1492.124861342" watchObservedRunningTime="2025-11-21 13:56:55.4022509 +0000 UTC m=+1492.128665627" Nov 21 13:56:55 crc kubenswrapper[4675]: I1121 13:56:55.500682 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:56:55 crc kubenswrapper[4675]: E1121 13:56:55.501406 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:56:55 crc kubenswrapper[4675]: E1121 13:56:55.501450 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:56:55 crc kubenswrapper[4675]: E1121 13:56:55.501554 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift podName:29cc3528-47d5-4479-85fc-37f8e53f1caf nodeName:}" failed. No retries permitted until 2025-11-21 13:57:03.501533086 +0000 UTC m=+1500.227947873 (durationBeforeRetry 8s). 
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.831291 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.942131 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"]
Nov 21 13:56:56 crc kubenswrapper[4675]: E1121 13:56:56.943404 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e652d6b-549f-4f1f-a082-34fb47c60cc0" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.952728 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e652d6b-549f-4f1f-a082-34fb47c60cc0" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: E1121 13:56:56.952890 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caac7346-06cd-4263-b6aa-fa0ac48d8442" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.952974 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="caac7346-06cd-4263-b6aa-fa0ac48d8442" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: E1121 13:56:56.953091 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636cfc25-1856-420d-a4e3-d906b44c3751" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.953177 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="636cfc25-1856-420d-a4e3-d906b44c3751" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: E1121 13:56:56.953268 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02708edd-65ef-4cc3-9d43-757a138f4028" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.953345 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="02708edd-65ef-4cc3-9d43-757a138f4028" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: E1121 13:56:56.953444 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d3dce6-9b93-48fb-b51e-203e3883c8cd" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.953517 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d3dce6-9b93-48fb-b51e-203e3883c8cd" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: E1121 13:56:56.953591 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e3998f-742b-4adb-9f05-6cb0a77065ef" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.953660 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e3998f-742b-4adb-9f05-6cb0a77065ef" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.954141 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d3dce6-9b93-48fb-b51e-203e3883c8cd" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.954238 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="02708edd-65ef-4cc3-9d43-757a138f4028" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.954318 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="caac7346-06cd-4263-b6aa-fa0ac48d8442" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.954392 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="636cfc25-1856-420d-a4e3-d906b44c3751" containerName="mariadb-database-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.954464 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e652d6b-549f-4f1f-a082-34fb47c60cc0" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.954531 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e3998f-742b-4adb-9f05-6cb0a77065ef" containerName="mariadb-account-create"
Nov 21 13:56:56 crc kubenswrapper[4675]: I1121 13:56:56.971061 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.007722 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-trdrw"]
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.008197 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-trdrw" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerName="dnsmasq-dns" containerID="cri-o://0f2a96bc3aee8c6d132ecadf5e14be0be2c4da5ee85509c85fa5e5b086bd536b" gracePeriod=10
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.059635 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"]
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.060523 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/f6615cb0-ff37-4c23-bd5c-5572486a0db4-kube-api-access-wkssx\") pod \"mysqld-exporter-openstack-cell1-db-create-rjxdq\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.060676 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6615cb0-ff37-4c23-bd5c-5572486a0db4-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rjxdq\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.146788 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-a877-account-create-qflzk"]
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.148578 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.152937 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.155227 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a877-account-create-qflzk"]
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.179443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/f6615cb0-ff37-4c23-bd5c-5572486a0db4-kube-api-access-wkssx\") pod \"mysqld-exporter-openstack-cell1-db-create-rjxdq\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.179577 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6615cb0-ff37-4c23-bd5c-5572486a0db4-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rjxdq\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.180867 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6615cb0-ff37-4c23-bd5c-5572486a0db4-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rjxdq\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.206662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/f6615cb0-ff37-4c23-bd5c-5572486a0db4-kube-api-access-wkssx\") pod \"mysqld-exporter-openstack-cell1-db-create-rjxdq\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.282328 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-operator-scripts\") pod \"mysqld-exporter-a877-account-create-qflzk\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.282524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsp84\" (UniqueName: \"kubernetes.io/projected/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-kube-api-access-vsp84\") pod \"mysqld-exporter-a877-account-create-qflzk\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.320652 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.384628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsp84\" (UniqueName: \"kubernetes.io/projected/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-kube-api-access-vsp84\") pod \"mysqld-exporter-a877-account-create-qflzk\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.385360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-operator-scripts\") pod \"mysqld-exporter-a877-account-create-qflzk\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.386121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-operator-scripts\") pod \"mysqld-exporter-a877-account-create-qflzk\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.393010 4675 generic.go:334] "Generic (PLEG): container finished" podID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerID="0f2a96bc3aee8c6d132ecadf5e14be0be2c4da5ee85509c85fa5e5b086bd536b" exitCode=0
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.393059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-trdrw" event={"ID":"95bb8692-22aa-4552-a633-86ccf0d7bd16","Type":"ContainerDied","Data":"0f2a96bc3aee8c6d132ecadf5e14be0be2c4da5ee85509c85fa5e5b086bd536b"}
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.402236 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsp84\" (UniqueName: \"kubernetes.io/projected/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-kube-api-access-vsp84\") pod \"mysqld-exporter-a877-account-create-qflzk\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:57 crc kubenswrapper[4675]: I1121 13:56:57.547365 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a877-account-create-qflzk"
Nov 21 13:56:58 crc kubenswrapper[4675]: I1121 13:56:58.167229 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 21 13:56:58 crc kubenswrapper[4675]: I1121 13:56:58.167588 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 21 13:56:58 crc kubenswrapper[4675]: I1121 13:56:58.172652 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 21 13:56:58 crc kubenswrapper[4675]: I1121 13:56:58.402915 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.107999 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-trdrw"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.233451 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-dns-svc\") pod \"95bb8692-22aa-4552-a633-86ccf0d7bd16\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") "
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.233553 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-nb\") pod \"95bb8692-22aa-4552-a633-86ccf0d7bd16\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") "
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.233709 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-sb\") pod \"95bb8692-22aa-4552-a633-86ccf0d7bd16\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") "
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.233756 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2z6j\" (UniqueName: \"kubernetes.io/projected/95bb8692-22aa-4552-a633-86ccf0d7bd16-kube-api-access-n2z6j\") pod \"95bb8692-22aa-4552-a633-86ccf0d7bd16\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") "
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.233777 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-config\") pod \"95bb8692-22aa-4552-a633-86ccf0d7bd16\" (UID: \"95bb8692-22aa-4552-a633-86ccf0d7bd16\") "
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.247654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95bb8692-22aa-4552-a633-86ccf0d7bd16-kube-api-access-n2z6j" (OuterVolumeSpecName: "kube-api-access-n2z6j") pod "95bb8692-22aa-4552-a633-86ccf0d7bd16" (UID: "95bb8692-22aa-4552-a633-86ccf0d7bd16"). InnerVolumeSpecName "kube-api-access-n2z6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.332913 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95bb8692-22aa-4552-a633-86ccf0d7bd16" (UID: "95bb8692-22aa-4552-a633-86ccf0d7bd16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.337523 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2z6j\" (UniqueName: \"kubernetes.io/projected/95bb8692-22aa-4552-a633-86ccf0d7bd16-kube-api-access-n2z6j\") on node \"crc\" DevicePath \"\""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.337564 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.342494 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95bb8692-22aa-4552-a633-86ccf0d7bd16" (UID: "95bb8692-22aa-4552-a633-86ccf0d7bd16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.343195 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-config" (OuterVolumeSpecName: "config") pod "95bb8692-22aa-4552-a633-86ccf0d7bd16" (UID: "95bb8692-22aa-4552-a633-86ccf0d7bd16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.345402 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95bb8692-22aa-4552-a633-86ccf0d7bd16" (UID: "95bb8692-22aa-4552-a633-86ccf0d7bd16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:56:59 crc kubenswrapper[4675]: W1121 13:56:59.393079 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6615cb0_ff37_4c23_bd5c_5572486a0db4.slice/crio-4645db62d85ca2002d0508abb2e3ad5516b539392270ef53c804e823072aafee WatchSource:0}: Error finding container 4645db62d85ca2002d0508abb2e3ad5516b539392270ef53c804e823072aafee: Status 404 returned error can't find the container with id 4645db62d85ca2002d0508abb2e3ad5516b539392270ef53c804e823072aafee
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.394716 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"]
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.411901 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq" event={"ID":"f6615cb0-ff37-4c23-bd5c-5572486a0db4","Type":"ContainerStarted","Data":"4645db62d85ca2002d0508abb2e3ad5516b539392270ef53c804e823072aafee"}
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.413632 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6s7pj" event={"ID":"e01d9dde-a9f3-4efc-8997-bf3914cffde9","Type":"ContainerStarted","Data":"a18544cda585f18976e12d0eca277c5e75f3fb52717c6044b9aafa597fd8f35f"}
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.415864 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-trdrw"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.415853 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-trdrw" event={"ID":"95bb8692-22aa-4552-a633-86ccf0d7bd16","Type":"ContainerDied","Data":"9c02c7d0505b99e472ecf78401eadf3e35a9f50f1f39af6105f75732fab6244a"}
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.415933 4675 scope.go:117] "RemoveContainer" containerID="0f2a96bc3aee8c6d132ecadf5e14be0be2c4da5ee85509c85fa5e5b086bd536b"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.444995 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.445446 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.445459 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bb8692-22aa-4552-a633-86ccf0d7bd16-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.463042 4675 scope.go:117] "RemoveContainer" containerID="1c10f161a26613b83d7e5f9676b4b0cb9367c97ebc7e36d68f13b4f3023e3bbf"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.470753 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6s7pj" podStartSLOduration=4.380867158 podStartE2EDuration="8.470736101s" podCreationTimestamp="2025-11-21 13:56:51 +0000 UTC" firstStartedPulling="2025-11-21 13:56:54.892209751 +0000 UTC m=+1491.618624478" lastFinishedPulling="2025-11-21 13:56:58.982078694 +0000 UTC m=+1495.708493421" observedRunningTime="2025-11-21 13:56:59.442010717 +0000 UTC m=+1496.168425444" watchObservedRunningTime="2025-11-21 13:56:59.470736101 +0000 UTC m=+1496.197150828"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.473782 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-trdrw"]
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.512735 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-trdrw"]
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.522284 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-l5r9b" podUID="4a15b97a-aa41-4d4d-8f75-0b3d2193eded" containerName="ovn-controller" probeResult="failure" output=<
Nov 21 13:56:59 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 21 13:56:59 crc kubenswrapper[4675]: >
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.534608 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a877-account-create-qflzk"]
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.572455 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-j7prf"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.598865 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-j7prf"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.821883 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l5r9b-config-9hn2t"]
Nov 21 13:56:59 crc kubenswrapper[4675]: E1121 13:56:59.822618 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerName="init"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.822636 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerName="init"
Nov 21 13:56:59 crc kubenswrapper[4675]: E1121 13:56:59.822661 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerName="dnsmasq-dns"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.822668 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerName="dnsmasq-dns"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.822859 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" containerName="dnsmasq-dns"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.823600 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.825859 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.832866 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5r9b-config-9hn2t"]
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.975757 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.975886 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-scripts\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.975926 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8px\" (UniqueName: \"kubernetes.io/projected/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-kube-api-access-hb8px\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.976062 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run-ovn\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.976290 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-log-ovn\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.976370 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-additional-scripts\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.994427 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8d6qh"]
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.995971 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.997566 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 21 13:56:59 crc kubenswrapper[4675]: I1121 13:56:59.999199 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bb8kq"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.005729 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8d6qh"]
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.077723 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run-ovn\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.077792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-config-data\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.077850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-log-ovn\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.077884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-additional-scripts\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.077922 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-combined-ca-bundle\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.077954 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-db-sync-config-data\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfg6\" (UniqueName: \"kubernetes.io/projected/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-kube-api-access-hqfg6\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078081 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run-ovn\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-scripts\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb8px\" (UniqueName: \"kubernetes.io/projected/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-kube-api-access-hb8px\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078229 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-log-ovn\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.078403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.079087 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-additional-scripts\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.080374 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-scripts\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.098504 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb8px\" (UniqueName: \"kubernetes.io/projected/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-kube-api-access-hb8px\") pod \"ovn-controller-l5r9b-config-9hn2t\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.180269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-config-data\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.180384 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-combined-ca-bundle\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.180423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-db-sync-config-data\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.180502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfg6\" (UniqueName: \"kubernetes.io/projected/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-kube-api-access-hqfg6\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.184019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-combined-ca-bundle\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.184164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-db-sync-config-data\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.184367 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-config-data\") pod \"glance-db-sync-8d6qh\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh"
Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.202934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfg6\" (UniqueName: \"kubernetes.io/projected/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-kube-api-access-hqfg6\") pod \"glance-db-sync-8d6qh\" (UID: 
\"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " pod="openstack/glance-db-sync-8d6qh" Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.263560 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5r9b-config-9hn2t" Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.318735 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8d6qh" Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.441374 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6615cb0-ff37-4c23-bd5c-5572486a0db4" containerID="586f640dbb478a252cac83be920826e464e3d45af122ed690f6414817b32085b" exitCode=0 Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.441520 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq" event={"ID":"f6615cb0-ff37-4c23-bd5c-5572486a0db4","Type":"ContainerDied","Data":"586f640dbb478a252cac83be920826e464e3d45af122ed690f6414817b32085b"} Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.448886 4675 generic.go:334] "Generic (PLEG): container finished" podID="e0084c6f-89fa-48fe-84f3-3d2805d9bf51" containerID="31af08730de271061b0e3fc25aea01bb35d25833d62356d931cd04c6a506708a" exitCode=0 Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.448971 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a877-account-create-qflzk" event={"ID":"e0084c6f-89fa-48fe-84f3-3d2805d9bf51","Type":"ContainerDied","Data":"31af08730de271061b0e3fc25aea01bb35d25833d62356d931cd04c6a506708a"} Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.449017 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a877-account-create-qflzk" event={"ID":"e0084c6f-89fa-48fe-84f3-3d2805d9bf51","Type":"ContainerStarted","Data":"abca147ed50a35398f5bffa1b30bab8e53da894d883881b78e170f5be9c4aee9"} Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.775403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l5r9b-config-9hn2t"] Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.861522 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95bb8692-22aa-4552-a633-86ccf0d7bd16" path="/var/lib/kubelet/pods/95bb8692-22aa-4552-a633-86ccf0d7bd16/volumes" Nov 21 13:57:00 crc kubenswrapper[4675]: I1121 13:57:00.954323 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8d6qh"] Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.144208 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.144715 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="prometheus" containerID="cri-o://2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa" gracePeriod=600 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.145227 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="config-reloader" containerID="cri-o://df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607" gracePeriod=600 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.145285 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/prometheus-metric-storage-0" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="thanos-sidecar" containerID="cri-o://099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c" gracePeriod=600 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.158276 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.465776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8d6qh" event={"ID":"8e21ce4f-da1d-4f89-8f41-6bb22c247d04","Type":"ContainerStarted","Data":"8f6be9af7126203161c448eff4ebf6a3d51f9988e3e8ccaab58c4eb0f4f42fba"} Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.469412 4675 generic.go:334] "Generic (PLEG): container finished" podID="12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" containerID="b8245c72015cfa270514a816a2f61e2c65001e3fd6f1546f6dd6a0b0924be580" exitCode=0 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.469470 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5r9b-config-9hn2t" event={"ID":"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4","Type":"ContainerDied","Data":"b8245c72015cfa270514a816a2f61e2c65001e3fd6f1546f6dd6a0b0924be580"} Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.469493 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5r9b-config-9hn2t" event={"ID":"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4","Type":"ContainerStarted","Data":"c1b525c0d850e8df8cea779f61a4a461fcfaa90f255dd9493bec220236f70838"} Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.474836 4675 generic.go:334] "Generic (PLEG): container finished" podID="6127fe70-ba8b-4093-9146-7dce78995786" containerID="099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c" exitCode=0 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.474872 4675 generic.go:334] "Generic (PLEG): container finished" podID="6127fe70-ba8b-4093-9146-7dce78995786" containerID="2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa" exitCode=0 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.474915 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerDied","Data":"099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c"} Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.474943 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerDied","Data":"2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa"} Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.476698 4675 generic.go:334] "Generic (PLEG): container finished" podID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerID="1d6d46a106cd3fc5f9be1fd55be2418cdb5bfcfe23f9faec67f0aa8d972ea46d" exitCode=0 Nov 21 13:57:01 crc kubenswrapper[4675]: I1121 13:57:01.476865 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22bdc76a-2740-432c-a43f-e0a57fdcb2c4","Type":"ContainerDied","Data":"1d6d46a106cd3fc5f9be1fd55be2418cdb5bfcfe23f9faec67f0aa8d972ea46d"} Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.214845 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a877-account-create-qflzk" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.255018 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.341766 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6615cb0-ff37-4c23-bd5c-5572486a0db4-operator-scripts\") pod \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.341840 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/f6615cb0-ff37-4c23-bd5c-5572486a0db4-kube-api-access-wkssx\") pod \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\" (UID: \"f6615cb0-ff37-4c23-bd5c-5572486a0db4\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.341871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-operator-scripts\") pod \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.342021 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsp84\" (UniqueName: \"kubernetes.io/projected/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-kube-api-access-vsp84\") pod \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\" (UID: \"e0084c6f-89fa-48fe-84f3-3d2805d9bf51\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.343838 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0084c6f-89fa-48fe-84f3-3d2805d9bf51" (UID: "e0084c6f-89fa-48fe-84f3-3d2805d9bf51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.344098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6615cb0-ff37-4c23-bd5c-5572486a0db4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6615cb0-ff37-4c23-bd5c-5572486a0db4" (UID: "f6615cb0-ff37-4c23-bd5c-5572486a0db4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.348206 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6615cb0-ff37-4c23-bd5c-5572486a0db4-kube-api-access-wkssx" (OuterVolumeSpecName: "kube-api-access-wkssx") pod "f6615cb0-ff37-4c23-bd5c-5572486a0db4" (UID: "f6615cb0-ff37-4c23-bd5c-5572486a0db4"). InnerVolumeSpecName "kube-api-access-wkssx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.353511 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-kube-api-access-vsp84" (OuterVolumeSpecName: "kube-api-access-vsp84") pod "e0084c6f-89fa-48fe-84f3-3d2805d9bf51" (UID: "e0084c6f-89fa-48fe-84f3-3d2805d9bf51"). InnerVolumeSpecName "kube-api-access-vsp84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.370616 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.444218 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6127fe70-ba8b-4093-9146-7dce78995786-prometheus-metric-storage-rulefiles-0\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.444591 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-thanos-prometheus-http-client-file\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.444785 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6127fe70-ba8b-4093-9146-7dce78995786-config-out\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.444956 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.444998 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjdfl\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-kube-api-access-fjdfl\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.445049 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-tls-assets\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.445392 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6127fe70-ba8b-4093-9146-7dce78995786-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.445942 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-config\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.446107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-web-config\") pod \"6127fe70-ba8b-4093-9146-7dce78995786\" (UID: \"6127fe70-ba8b-4093-9146-7dce78995786\") " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.448248 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/f6615cb0-ff37-4c23-bd5c-5572486a0db4-kube-api-access-wkssx\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.448277 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.448286 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsp84\" (UniqueName: \"kubernetes.io/projected/e0084c6f-89fa-48fe-84f3-3d2805d9bf51-kube-api-access-vsp84\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.448296 4675 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6127fe70-ba8b-4093-9146-7dce78995786-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.448307 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6615cb0-ff37-4c23-bd5c-5572486a0db4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.460532 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.460953 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.461239 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6127fe70-ba8b-4093-9146-7dce78995786-config-out" (OuterVolumeSpecName: "config-out") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.461350 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-kube-api-access-fjdfl" (OuterVolumeSpecName: "kube-api-access-fjdfl") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "kube-api-access-fjdfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.462982 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-config" (OuterVolumeSpecName: "config") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.484460 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.488338 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-web-config" (OuterVolumeSpecName: "web-config") pod "6127fe70-ba8b-4093-9146-7dce78995786" (UID: "6127fe70-ba8b-4093-9146-7dce78995786"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.514141 4675 generic.go:334] "Generic (PLEG): container finished" podID="6127fe70-ba8b-4093-9146-7dce78995786" containerID="df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607" exitCode=0 Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.514257 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerDied","Data":"df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607"} Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.514290 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6127fe70-ba8b-4093-9146-7dce78995786","Type":"ContainerDied","Data":"2b70c8a0b5ef0762075e620c46356a70fa79449a584cbec7cce692184de7ab98"} Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.514311 4675 scope.go:117] "RemoveContainer" containerID="099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.514497 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.519272 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22bdc76a-2740-432c-a43f-e0a57fdcb2c4","Type":"ContainerStarted","Data":"d89564125adb7ca8ec871adff289d88dd828d0d4bdd18a5e3cceb25b024051a0"} Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.520246 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.522609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a877-account-create-qflzk" event={"ID":"e0084c6f-89fa-48fe-84f3-3d2805d9bf51","Type":"ContainerDied","Data":"abca147ed50a35398f5bffa1b30bab8e53da894d883881b78e170f5be9c4aee9"} Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.522653 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abca147ed50a35398f5bffa1b30bab8e53da894d883881b78e170f5be9c4aee9" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.522713 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a877-account-create-qflzk" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.530947 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.531189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq" event={"ID":"f6615cb0-ff37-4c23-bd5c-5572486a0db4","Type":"ContainerDied","Data":"4645db62d85ca2002d0508abb2e3ad5516b539392270ef53c804e823072aafee"} Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.531221 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4645db62d85ca2002d0508abb2e3ad5516b539392270ef53c804e823072aafee" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.545676 4675 scope.go:117] "RemoveContainer" containerID="df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550100 4675 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550141 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550152 4675 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-web-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550176 4675 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6127fe70-ba8b-4093-9146-7dce78995786-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550190 4675 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6127fe70-ba8b-4093-9146-7dce78995786-config-out\") on node \"crc\" DevicePath \"\"" 
Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550235 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") on node \"crc\" " Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.550251 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjdfl\" (UniqueName: \"kubernetes.io/projected/6127fe70-ba8b-4093-9146-7dce78995786-kube-api-access-fjdfl\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.566360 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=72.566333788 podStartE2EDuration="1m12.566333788s" podCreationTimestamp="2025-11-21 13:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:02.556817381 +0000 UTC m=+1499.283232128" watchObservedRunningTime="2025-11-21 13:57:02.566333788 +0000 UTC m=+1499.292748525" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.612143 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.612858 4675 scope.go:117] "RemoveContainer" containerID="2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.618966 4675 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.619192 4675 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3") on node "crc" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.641601 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.652584 4675 reconciler_common.go:293] "Volume detached for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672117 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.672613 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="thanos-sidecar" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672630 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="thanos-sidecar" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.672654 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="config-reloader" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672663 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="config-reloader" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.672684 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="init-config-reloader" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672696 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="init-config-reloader" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.672712 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="prometheus" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672718 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="prometheus" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.672729 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0084c6f-89fa-48fe-84f3-3d2805d9bf51" containerName="mariadb-account-create" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672735 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0084c6f-89fa-48fe-84f3-3d2805d9bf51" containerName="mariadb-account-create" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.672755 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6615cb0-ff37-4c23-bd5c-5572486a0db4" containerName="mariadb-database-create" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672762 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6615cb0-ff37-4c23-bd5c-5572486a0db4" containerName="mariadb-database-create" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672967 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="config-reloader" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672982 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6615cb0-ff37-4c23-bd5c-5572486a0db4" containerName="mariadb-database-create" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.672992 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0084c6f-89fa-48fe-84f3-3d2805d9bf51" containerName="mariadb-account-create" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.673006 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="thanos-sidecar" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.673018 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6127fe70-ba8b-4093-9146-7dce78995786" containerName="prometheus" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.675021 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.700592 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.700694 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.700958 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8mjkk" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.701135 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.701279 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.705903 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.713617 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.719145 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.734363 4675 scope.go:117] "RemoveContainer" containerID="2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.753845 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.753946 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.753978 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfx6\" (UniqueName: \"kubernetes.io/projected/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-kube-api-access-7kfx6\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754364 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754454 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-config\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754477 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754515 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754541 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.754609 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.835838 4675 scope.go:117] "RemoveContainer" containerID="099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.839044 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c\": container with ID starting with 099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c not found: ID does not exist" 
containerID="099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.839117 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c"} err="failed to get container status \"099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c\": rpc error: code = NotFound desc = could not find container \"099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c\": container with ID starting with 099224db4fdfdd45257116afee84791047720642a5b3a5f07d56856ed2a9158c not found: ID does not exist" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.839149 4675 scope.go:117] "RemoveContainer" containerID="df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.843735 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607\": container with ID starting with df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607 not found: ID does not exist" containerID="df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.843799 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607"} err="failed to get container status \"df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607\": rpc error: code = NotFound desc = could not find container \"df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607\": container with ID starting with df8a81103811eef23420d9038874987411ab7080c5924032b9bad9cbb32d3607 not found: ID does not exist" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.843831 4675 scope.go:117] "RemoveContainer" containerID="2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.844632 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa\": container with ID starting with 2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa not found: ID does not exist" containerID="2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.844668 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa"} err="failed to get container status \"2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa\": rpc error: code = NotFound desc = could not find container \"2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa\": container with ID starting with 2f36187ae9d2bee2eae2ba29d80d5925b06fec1f7cab412d0ae07ce22995b7fa not found: ID does not exist" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.844688 4675 scope.go:117] "RemoveContainer" containerID="2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240" Nov 21 13:57:02 crc kubenswrapper[4675]: E1121 13:57:02.845049 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240\": container with ID starting with 2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240 not found: ID does not exist" containerID="2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.845094 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240"} err="failed to get container status \"2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240\": rpc error: code = NotFound desc = could not find container \"2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240\": container with ID starting with 2e5c8eedbbb5e3082bba8f889ac993791b2defde5035cdbba6331c5ad535a240 not found: ID does not exist" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.856206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-config\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.856873 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.856975 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857015 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857369 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857546 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") 
pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857590 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfx6\" (UniqueName: \"kubernetes.io/projected/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-kube-api-access-7kfx6\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857713 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.857750 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.860837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.865409 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.866567 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.875663 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 
13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.876967 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.877403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-config\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.884911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.887473 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.894773 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.908476 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfx6\" (UniqueName: \"kubernetes.io/projected/ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79-kube-api-access-7kfx6\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.927277 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6127fe70-ba8b-4093-9146-7dce78995786" path="/var/lib/kubelet/pods/6127fe70-ba8b-4093-9146-7dce78995786/volumes" Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.957988 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:57:02 crc kubenswrapper[4675]: I1121 13:57:02.958039 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/98ccaa3eff25f930abf25e5d04600bac866339b49645deb89c687ceef7decd24/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.181443 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08a275e8-fe31-4d51-8b32-befc02ee32d3\") pod \"prometheus-metric-storage-0\" (UID: \"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.314700 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l5r9b-config-9hn2t" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.333257 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.484548 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb8px\" (UniqueName: \"kubernetes.io/projected/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-kube-api-access-hb8px\") pod \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.485026 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-additional-scripts\") pod \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.485182 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-scripts\") pod \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.485219 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run\") pod \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.485294 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run-ovn\") pod \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.485377 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-log-ovn\") pod \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\" (UID: \"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4\") " Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 
13:57:03.486565 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" (UID: "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.492829 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" (UID: "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.492901 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run" (OuterVolumeSpecName: "var-run") pod "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" (UID: "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.492934 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" (UID: "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.494247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-scripts" (OuterVolumeSpecName: "scripts") pod "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" (UID: "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.498979 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-kube-api-access-hb8px" (OuterVolumeSpecName: "kube-api-access-hb8px") pod "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" (UID: "12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4"). InnerVolumeSpecName "kube-api-access-hb8px". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.553816 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-l5r9b-config-9hn2t"
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.553886 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l5r9b-config-9hn2t" event={"ID":"12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4","Type":"ContainerDied","Data":"c1b525c0d850e8df8cea779f61a4a461fcfaa90f255dd9493bec220236f70838"}
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.553923 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b525c0d850e8df8cea779f61a4a461fcfaa90f255dd9493bec220236f70838"
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.587743 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0"
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.588486 4675 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.588870 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb8px\" (UniqueName: \"kubernetes.io/projected/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-kube-api-access-hb8px\") on node \"crc\" DevicePath \"\""
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.589056 4675 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.589134 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.589197 4675 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run\") on node \"crc\" DevicePath \"\""
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.589253 4675 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 21 13:57:03 crc kubenswrapper[4675]: E1121 13:57:03.588020 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 21 13:57:03 crc kubenswrapper[4675]: E1121 13:57:03.589355 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 21 13:57:03 crc kubenswrapper[4675]: E1121 13:57:03.589440 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift podName:29cc3528-47d5-4479-85fc-37f8e53f1caf nodeName:}" failed. No retries permitted until 2025-11-21 13:57:19.589425229 +0000 UTC m=+1516.315839956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift") pod "swift-storage-0" (UID: "29cc3528-47d5-4479-85fc-37f8e53f1caf") : configmap "swift-ring-files" not found
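The three E-level entries above are kubelet's per-volume exponential backoff at work: the etc-swift projected volume for swift-storage-0 cannot be built because the swift-ring-files ConfigMap does not exist yet, and each consecutive failure doubles the delay before the next attempt. The 16 s here is consistent with an initial delay of 500 ms doubled on each subsequent failure (0.5, 1, 2, 4, 8, 16), and the retry scheduled for 13:57:19 does succeed further down in the log, after the swift-ring-rebalance job has finished with exit code 0. A minimal sketch of that doubling schedule; the 500 ms initial delay and ~2 m cap are assumptions in the style of kubelet's exponentialbackoff helper, not pinned to a specific release:

```go
package main

import (
	"fmt"
	"time"
)

// Exponential backoff in the style of kubelet's nestedpendingoperations:
// double the delay after every consecutive failure, up to a cap. The
// constants are assumptions for illustration.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

func durationBeforeRetry(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// After six consecutive failures the delay reaches 16s, matching the
	// "(durationBeforeRetry 16s)" seen in the log entry above.
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> retry in %v\n", n, durationBeforeRetry(n))
	}
}
```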
Nov 21 13:57:03 crc kubenswrapper[4675]: W1121 13:57:03.826967 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac0a374c_2ff5_4fa8_b6be_5d4ae4445a79.slice/crio-a03b0c8d8679851a1c4b40f3cdaf493ce2d663161334f76d5f56f7ae2957f3d5 WatchSource:0}: Error finding container a03b0c8d8679851a1c4b40f3cdaf493ce2d663161334f76d5f56f7ae2957f3d5: Status 404 returned error can't find the container with id a03b0c8d8679851a1c4b40f3cdaf493ce2d663161334f76d5f56f7ae2957f3d5
Nov 21 13:57:03 crc kubenswrapper[4675]: I1121 13:57:03.830653 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 21 13:57:04 crc kubenswrapper[4675]: I1121 13:57:04.420442 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-l5r9b-config-9hn2t"]
Nov 21 13:57:04 crc kubenswrapper[4675]: I1121 13:57:04.431926 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-l5r9b-config-9hn2t"]
Nov 21 13:57:04 crc kubenswrapper[4675]: I1121 13:57:04.520942 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-l5r9b"
Nov 21 13:57:04 crc kubenswrapper[4675]: I1121 13:57:04.568637 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79","Type":"ContainerStarted","Data":"a03b0c8d8679851a1c4b40f3cdaf493ce2d663161334f76d5f56f7ae2957f3d5"}
Nov 21 13:57:04 crc kubenswrapper[4675]: I1121 13:57:04.867133 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" path="/var/lib/kubelet/pods/12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4/volumes"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.302723 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 21 13:57:07 crc kubenswrapper[4675]: E1121 13:57:07.303813 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" containerName="ovn-config"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.303832 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" containerName="ovn-config"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.304100 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb31f3-e4f0-47b8-9a34-14e4eccdc8b4" containerName="ovn-config"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.305026 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.308570 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.320751 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.390206 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-config-data\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.390454 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvhch\" (UniqueName: \"kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.390494 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.492394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvhch\" (UniqueName: \"kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.492450 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.492531 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-config-data\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.511534 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.512107 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-config-data\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.514216 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvhch\" (UniqueName: \"kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0"
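Each of mysqld-exporter-0's three volumes above goes through the same progression: operationExecutor.VerifyControllerAttachedVolume (the volume is attached as far as the controller is concerned), then operationExecutor.MountVolume started (the pod wants it but it is not yet in the actual state of world), then MountVolume.SetUp succeeded (actual state now matches desired state). A much-simplified sketch of that desired-state/actual-state reconcile loop; the types and names here are illustrative, not kubelet's real ones:

```go
package main

import "fmt"

type volumeName string

// reconciler compares a desired state of world (volumes the pod specs
// want mounted) against an actual state of world (volumes already set
// up) and issues mount operations for the difference, which is the
// pattern behind the reconciler_common.go lines above.
type reconciler struct {
	desired map[volumeName]bool // populated from pod specs
	actual  map[volumeName]bool // populated by successful SetUp calls
}

func (r *reconciler) reconcile(mount func(volumeName) error) {
	for v := range r.desired {
		if r.actual[v] {
			continue // already mounted; nothing to do
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		if err := mount(v); err != nil {
			// A real reconciler schedules a retry with exponential
			// backoff here, as in the etc-swift failure earlier.
			fmt.Printf("MountVolume.SetUp failed for %q: %v\n", v, err)
			continue
		}
		r.actual[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
	}
}

func main() {
	r := &reconciler{
		desired: map[volumeName]bool{
			"config-data": true, "combined-ca-bundle": true, "kube-api-access-pvhch": true,
		},
		actual: map[volumeName]bool{},
	}
	// Pretend every mount succeeds; prints a started/succeeded pair per volume.
	r.reconcile(func(v volumeName) error { return nil })
}
```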
\"kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch\") pod \"mysqld-exporter-0\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " pod="openstack/mysqld-exporter-0" Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.599571 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79","Type":"ContainerStarted","Data":"4fb9e5ccf871b16314d3ab733a193902aff2e28235ac6e2b5dbcf9c0a9edd4fa"} Nov 21 13:57:07 crc kubenswrapper[4675]: I1121 13:57:07.634463 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:57:08 crc kubenswrapper[4675]: I1121 13:57:08.131688 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:57:08 crc kubenswrapper[4675]: I1121 13:57:08.612233 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"dddea1da-ad44-4caa-8719-55dea099d456","Type":"ContainerStarted","Data":"12a62abd7ee93db6147a776cd92028d52ec6163443e0d87fca81fb27f10b4527"} Nov 21 13:57:10 crc kubenswrapper[4675]: I1121 13:57:10.635234 4675 generic.go:334] "Generic (PLEG): container finished" podID="e01d9dde-a9f3-4efc-8997-bf3914cffde9" containerID="a18544cda585f18976e12d0eca277c5e75f3fb52717c6044b9aafa597fd8f35f" exitCode=0 Nov 21 13:57:10 crc kubenswrapper[4675]: I1121 13:57:10.635471 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6s7pj" event={"ID":"e01d9dde-a9f3-4efc-8997-bf3914cffde9","Type":"ContainerDied","Data":"a18544cda585f18976e12d0eca277c5e75f3fb52717c6044b9aafa597fd8f35f"} Nov 21 13:57:11 crc kubenswrapper[4675]: I1121 13:57:11.474266 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 13:57:11 crc kubenswrapper[4675]: I1121 13:57:11.972372 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fft96"] Nov 21 13:57:11 crc kubenswrapper[4675]: I1121 13:57:11.973984 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.045148 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fft96"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.088447 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wj42\" (UniqueName: \"kubernetes.io/projected/a5a7071c-2a7a-431a-b580-a4f6038444b6-kube-api-access-9wj42\") pod \"cinder-db-create-fft96\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.088612 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a7071c-2a7a-431a-b580-a4f6038444b6-operator-scripts\") pod \"cinder-db-create-fft96\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.102271 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6fc9-account-create-htjrd"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.104015 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.110328 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.115498 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6fc9-account-create-htjrd"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.190712 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a7071c-2a7a-431a-b580-a4f6038444b6-operator-scripts\") pod \"cinder-db-create-fft96\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.190833 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lnzq\" (UniqueName: \"kubernetes.io/projected/d5e5671c-9388-46fd-b5f2-7c2bc71db709-kube-api-access-5lnzq\") pod \"barbican-6fc9-account-create-htjrd\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.190874 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e5671c-9388-46fd-b5f2-7c2bc71db709-operator-scripts\") pod \"barbican-6fc9-account-create-htjrd\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.190969 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wj42\" (UniqueName: \"kubernetes.io/projected/a5a7071c-2a7a-431a-b580-a4f6038444b6-kube-api-access-9wj42\") pod \"cinder-db-create-fft96\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.192044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a7071c-2a7a-431a-b580-a4f6038444b6-operator-scripts\") pod \"cinder-db-create-fft96\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.239137 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2lt6m"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.240868 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.266974 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3930-account-create-nfddg"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.267228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wj42\" (UniqueName: \"kubernetes.io/projected/a5a7071c-2a7a-431a-b580-a4f6038444b6-kube-api-access-9wj42\") pod \"cinder-db-create-fft96\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.283086 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2lt6m"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.283260 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.289589 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.290900 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fft96" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.298513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lnzq\" (UniqueName: \"kubernetes.io/projected/d5e5671c-9388-46fd-b5f2-7c2bc71db709-kube-api-access-5lnzq\") pod \"barbican-6fc9-account-create-htjrd\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.298587 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-operator-scripts\") pod \"barbican-db-create-2lt6m\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.298630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e5671c-9388-46fd-b5f2-7c2bc71db709-operator-scripts\") pod \"barbican-6fc9-account-create-htjrd\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.298893 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n859l\" (UniqueName: \"kubernetes.io/projected/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-kube-api-access-n859l\") pod \"barbican-db-create-2lt6m\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.299743 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e5671c-9388-46fd-b5f2-7c2bc71db709-operator-scripts\") pod \"barbican-6fc9-account-create-htjrd\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.330417 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3930-account-create-nfddg"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 
13:57:12.376448 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lnzq\" (UniqueName: \"kubernetes.io/projected/d5e5671c-9388-46fd-b5f2-7c2bc71db709-kube-api-access-5lnzq\") pod \"barbican-6fc9-account-create-htjrd\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.454867 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-operator-scripts\") pod \"barbican-db-create-2lt6m\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.454957 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzp5\" (UniqueName: \"kubernetes.io/projected/86850670-f736-42b5-87ed-147d7a572d73-kube-api-access-rpzp5\") pod \"cinder-3930-account-create-nfddg\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.455149 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86850670-f736-42b5-87ed-147d7a572d73-operator-scripts\") pod \"cinder-3930-account-create-nfddg\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.455343 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n859l\" (UniqueName: \"kubernetes.io/projected/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-kube-api-access-n859l\") pod \"barbican-db-create-2lt6m\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.456718 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-operator-scripts\") pod \"barbican-db-create-2lt6m\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.483285 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.535803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n859l\" (UniqueName: \"kubernetes.io/projected/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-kube-api-access-n859l\") pod \"barbican-db-create-2lt6m\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.550132 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wrfwv"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.552024 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.559402 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wrfwv"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.560754 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86850670-f736-42b5-87ed-147d7a572d73-operator-scripts\") pod \"cinder-3930-account-create-nfddg\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.561027 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzp5\" (UniqueName: \"kubernetes.io/projected/86850670-f736-42b5-87ed-147d7a572d73-kube-api-access-rpzp5\") pod \"cinder-3930-account-create-nfddg\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.562959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86850670-f736-42b5-87ed-147d7a572d73-operator-scripts\") pod \"cinder-3930-account-create-nfddg\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.565942 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-gf6k8"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.567380 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.570361 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tzj4r" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.570638 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.570934 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.576578 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.591540 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gf6k8"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.603998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzp5\" (UniqueName: \"kubernetes.io/projected/86850670-f736-42b5-87ed-147d7a572d73-kube-api-access-rpzp5\") pod \"cinder-3930-account-create-nfddg\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.645266 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-822b-account-create-khlxc"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.646980 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.650528 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.662686 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wlk\" (UniqueName: \"kubernetes.io/projected/c9a259f8-81fa-445f-b17b-9d12b0114a50-kube-api-access-v8wlk\") pod \"heat-db-create-gf6k8\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.662758 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-config-data\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.662790 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-combined-ca-bundle\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.662805 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a259f8-81fa-445f-b17b-9d12b0114a50-operator-scripts\") pod \"heat-db-create-gf6k8\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.662858 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtvlj\" (UniqueName: \"kubernetes.io/projected/8f97708f-a116-4805-b085-d887c811b56a-kube-api-access-mtvlj\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.678584 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.694785 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-822b-account-create-khlxc"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.726094 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.764862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz24p\" (UniqueName: \"kubernetes.io/projected/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-kube-api-access-lz24p\") pod \"heat-822b-account-create-khlxc\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.764913 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtvlj\" (UniqueName: \"kubernetes.io/projected/8f97708f-a116-4805-b085-d887c811b56a-kube-api-access-mtvlj\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.765025 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8wlk\" (UniqueName: \"kubernetes.io/projected/c9a259f8-81fa-445f-b17b-9d12b0114a50-kube-api-access-v8wlk\") pod \"heat-db-create-gf6k8\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.765096 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-config-data\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.765119 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-operator-scripts\") pod \"heat-822b-account-create-khlxc\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.765141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-combined-ca-bundle\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.765156 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a259f8-81fa-445f-b17b-9d12b0114a50-operator-scripts\") pod \"heat-db-create-gf6k8\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.766371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a259f8-81fa-445f-b17b-9d12b0114a50-operator-scripts\") pod \"heat-db-create-gf6k8\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.770538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-config-data\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 
21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.790755 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8wlk\" (UniqueName: \"kubernetes.io/projected/c9a259f8-81fa-445f-b17b-9d12b0114a50-kube-api-access-v8wlk\") pod \"heat-db-create-gf6k8\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.791154 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-combined-ca-bundle\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.794228 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-szwd2"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.794465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtvlj\" (UniqueName: \"kubernetes.io/projected/8f97708f-a116-4805-b085-d887c811b56a-kube-api-access-mtvlj\") pod \"keystone-db-sync-wrfwv\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.795536 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.821794 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2480-account-create-94rfs"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.823553 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.825869 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.838433 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-szwd2"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.867637 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85230bf1-fd16-4f24-9f1f-13d6a960db2f-operator-scripts\") pod \"neutron-db-create-szwd2\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.867791 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlsx\" (UniqueName: \"kubernetes.io/projected/85230bf1-fd16-4f24-9f1f-13d6a960db2f-kube-api-access-9mlsx\") pod \"neutron-db-create-szwd2\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.867856 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-operator-scripts\") pod \"heat-822b-account-create-khlxc\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.867907 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/106316ae-7bc1-4693-a06d-9d93516af3a4-operator-scripts\") pod \"neutron-2480-account-create-94rfs\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.867952 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvtg\" (UniqueName: \"kubernetes.io/projected/106316ae-7bc1-4693-a06d-9d93516af3a4-kube-api-access-7kvtg\") pod \"neutron-2480-account-create-94rfs\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.868026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz24p\" (UniqueName: \"kubernetes.io/projected/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-kube-api-access-lz24p\") pod \"heat-822b-account-create-khlxc\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.872053 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-operator-scripts\") pod \"heat-822b-account-create-khlxc\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.878526 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2480-account-create-94rfs"] Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.879283 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.895177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz24p\" (UniqueName: \"kubernetes.io/projected/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-kube-api-access-lz24p\") pod \"heat-822b-account-create-khlxc\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.952154 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.967448 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.969338 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlsx\" (UniqueName: \"kubernetes.io/projected/85230bf1-fd16-4f24-9f1f-13d6a960db2f-kube-api-access-9mlsx\") pod \"neutron-db-create-szwd2\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.969417 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/106316ae-7bc1-4693-a06d-9d93516af3a4-operator-scripts\") pod \"neutron-2480-account-create-94rfs\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.969468 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvtg\" (UniqueName: \"kubernetes.io/projected/106316ae-7bc1-4693-a06d-9d93516af3a4-kube-api-access-7kvtg\") pod \"neutron-2480-account-create-94rfs\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.969580 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85230bf1-fd16-4f24-9f1f-13d6a960db2f-operator-scripts\") pod \"neutron-db-create-szwd2\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.970386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85230bf1-fd16-4f24-9f1f-13d6a960db2f-operator-scripts\") pod \"neutron-db-create-szwd2\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.970527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/106316ae-7bc1-4693-a06d-9d93516af3a4-operator-scripts\") pod \"neutron-2480-account-create-94rfs\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.988251 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvtg\" (UniqueName: \"kubernetes.io/projected/106316ae-7bc1-4693-a06d-9d93516af3a4-kube-api-access-7kvtg\") pod \"neutron-2480-account-create-94rfs\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:12 crc kubenswrapper[4675]: I1121 13:57:12.990182 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlsx\" (UniqueName: \"kubernetes.io/projected/85230bf1-fd16-4f24-9f1f-13d6a960db2f-kube-api-access-9mlsx\") pod \"neutron-db-create-szwd2\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:13 crc kubenswrapper[4675]: I1121 13:57:13.200789 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:13 crc kubenswrapper[4675]: I1121 13:57:13.206282 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2480-account-create-94rfs"
Nov 21 13:57:13 crc kubenswrapper[4675]: I1121 13:57:13.693298 4675 generic.go:334] "Generic (PLEG): container finished" podID="ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79" containerID="4fb9e5ccf871b16314d3ab733a193902aff2e28235ac6e2b5dbcf9c0a9edd4fa" exitCode=0
Nov 21 13:57:13 crc kubenswrapper[4675]: I1121 13:57:13.693342 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79","Type":"ContainerDied","Data":"4fb9e5ccf871b16314d3ab733a193902aff2e28235ac6e2b5dbcf9c0a9edd4fa"}
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.535243 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6s7pj"
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-dispersionconf\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569359 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dts7s\" (UniqueName: \"kubernetes.io/projected/e01d9dde-a9f3-4efc-8997-bf3914cffde9-kube-api-access-dts7s\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569417 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-swiftconf\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569447 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-combined-ca-bundle\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569497 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e01d9dde-a9f3-4efc-8997-bf3914cffde9-etc-swift\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569563 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-scripts\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.569593 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-ring-data-devices\") pod \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\" (UID: \"e01d9dde-a9f3-4efc-8997-bf3914cffde9\") "
Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.570915 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
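The "Generic (PLEG): container finished" / "SyncLoop (PLEG)" pairs above come from the pod lifecycle event generator: it relists containers from the runtime, diffs the observed states against the previous relist, and turns each transition into an event (ContainerStarted, ContainerDied) for the sync loop; the ContainerDied for swift-ring-rebalance-6s7pj at 13:57:10 is what leads to the UnmountVolume burst for its volumes here at 13:57:17. A much-reduced sketch of that relist-and-diff step, with illustrative types rather than kubelet's real ones:

```go
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	podID, containerID string
	kind               string // "ContainerStarted" or "ContainerDied"
}

// relist diffs the runtime's current container states against the
// previous relist and emits one lifecycle event per transition, the way
// PLEG turns a finished container into a ContainerDied event.
func relist(prev, curr map[string]state, podID string) []event {
	var evs []event
	for id, s := range curr {
		old, seen := prev[id]
		switch {
		case !seen && s == running:
			evs = append(evs, event{podID, id, "ContainerStarted"})
		case seen && old == running && s == exited:
			evs = append(evs, event{podID, id, "ContainerDied"})
		}
	}
	return evs
}

func main() {
	// Container ID abbreviated from the log for readability.
	prev := map[string]state{"4fb9e5cc": running}
	curr := map[string]state{"4fb9e5cc": exited}
	for _, e := range relist(prev, curr, "ac0a374c") {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.podID, e.kind, e.containerID)
	}
}
```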
"ring-data-devices") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.579357 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01d9dde-a9f3-4efc-8997-bf3914cffde9-kube-api-access-dts7s" (OuterVolumeSpecName: "kube-api-access-dts7s") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "kube-api-access-dts7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.581438 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e01d9dde-a9f3-4efc-8997-bf3914cffde9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.587430 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.609648 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.612144 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-scripts" (OuterVolumeSpecName: "scripts") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.650751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e01d9dde-a9f3-4efc-8997-bf3914cffde9" (UID: "e01d9dde-a9f3-4efc-8997-bf3914cffde9"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.676725 4675 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.677053 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dts7s\" (UniqueName: \"kubernetes.io/projected/e01d9dde-a9f3-4efc-8997-bf3914cffde9-kube-api-access-dts7s\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.677081 4675 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.677089 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01d9dde-a9f3-4efc-8997-bf3914cffde9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.677098 4675 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e01d9dde-a9f3-4efc-8997-bf3914cffde9-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.677108 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.677115 4675 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e01d9dde-a9f3-4efc-8997-bf3914cffde9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.741741 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6s7pj" event={"ID":"e01d9dde-a9f3-4efc-8997-bf3914cffde9","Type":"ContainerDied","Data":"f5527c35c1c5436bd6bcc2f21aa3e6a12a2e59146239f1cbb537c2f1fd88444b"} Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.741781 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5527c35c1c5436bd6bcc2f21aa3e6a12a2e59146239f1cbb537c2f1fd88444b" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.741880 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6s7pj" Nov 21 13:57:17 crc kubenswrapper[4675]: I1121 13:57:17.786472 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fft96"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.358085 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-822b-account-create-khlxc"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.376566 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wrfwv"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.388205 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2lt6m"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.396604 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6fc9-account-create-htjrd"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.403964 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3930-account-create-nfddg"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.411353 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gf6k8"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.438135 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2480-account-create-94rfs"] Nov 21 13:57:18 crc kubenswrapper[4675]: I1121 13:57:18.836725 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-szwd2"] Nov 21 13:57:19 crc kubenswrapper[4675]: I1121 13:57:19.621047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:57:19 crc kubenswrapper[4675]: I1121 13:57:19.632727 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29cc3528-47d5-4479-85fc-37f8e53f1caf-etc-swift\") pod \"swift-storage-0\" (UID: \"29cc3528-47d5-4479-85fc-37f8e53f1caf\") " pod="openstack/swift-storage-0" Nov 21 13:57:19 crc kubenswrapper[4675]: I1121 13:57:19.786638 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 21 13:57:19 crc kubenswrapper[4675]: W1121 13:57:19.881492 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a259f8_81fa_445f_b17b_9d12b0114a50.slice/crio-e953584fc1439632bcc512a27d2376c5688146728a901c133fb60c1bb063a8f4 WatchSource:0}: Error finding container e953584fc1439632bcc512a27d2376c5688146728a901c133fb60c1bb063a8f4: Status 404 returned error can't find the container with id e953584fc1439632bcc512a27d2376c5688146728a901c133fb60c1bb063a8f4 Nov 21 13:57:20 crc kubenswrapper[4675]: I1121 13:57:20.788577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79","Type":"ContainerStarted","Data":"7c07060c05fe09ac19baf6907247a67cd1ef6cb8ac8452bed3243fdff54154d8"} Nov 21 13:57:20 crc kubenswrapper[4675]: I1121 13:57:20.789502 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gf6k8" event={"ID":"c9a259f8-81fa-445f-b17b-9d12b0114a50","Type":"ContainerStarted","Data":"e953584fc1439632bcc512a27d2376c5688146728a901c133fb60c1bb063a8f4"} Nov 21 13:57:21 crc kubenswrapper[4675]: W1121 13:57:21.078965 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e5671c_9388_46fd_b5f2_7c2bc71db709.slice/crio-c37225f54bfb16e577524db86b8ec8313d39e40d70f04269b6df6099b7d767d9 WatchSource:0}: Error finding container c37225f54bfb16e577524db86b8ec8313d39e40d70f04269b6df6099b7d767d9: Status 404 returned error can't find the container with id c37225f54bfb16e577524db86b8ec8313d39e40d70f04269b6df6099b7d767d9 Nov 21 13:57:21 crc kubenswrapper[4675]: W1121 13:57:21.081035 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod069dca18_b43c_4d0a_a088_bd2c11e2e8d8.slice/crio-9b8e911e54745332ff8ddc6eebbf49cac594e86cb769cd9442e6c074bdda4080 WatchSource:0}: Error finding container 9b8e911e54745332ff8ddc6eebbf49cac594e86cb769cd9442e6c074bdda4080: Status 404 returned error can't find the container with id 9b8e911e54745332ff8ddc6eebbf49cac594e86cb769cd9442e6c074bdda4080 Nov 21 13:57:21 crc kubenswrapper[4675]: W1121 13:57:21.084744 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f97708f_a116_4805_b085_d887c811b56a.slice/crio-8d0904e0cc96f39719c02273a15acb91d4a41b24139bb5fb7b0544194bb80e29 WatchSource:0}: Error finding container 8d0904e0cc96f39719c02273a15acb91d4a41b24139bb5fb7b0544194bb80e29: Status 404 returned error can't find the container with id 8d0904e0cc96f39719c02273a15acb91d4a41b24139bb5fb7b0544194bb80e29 Nov 21 13:57:21 crc kubenswrapper[4675]: W1121 13:57:21.088207 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51637b2d_14b5_4bb3_95ee_e2cafe7780e2.slice/crio-fc3ed0548e33b91eca3ac57d7f87f1f4c5af2f87c6a659680a6f3c10966a9ad0 WatchSource:0}: Error finding container fc3ed0548e33b91eca3ac57d7f87f1f4c5af2f87c6a659680a6f3c10966a9ad0: Status 404 returned error can't find the container with id fc3ed0548e33b91eca3ac57d7f87f1f4c5af2f87c6a659680a6f3c10966a9ad0 Nov 21 13:57:21 crc kubenswrapper[4675]: W1121 13:57:21.096303 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85230bf1_fd16_4f24_9f1f_13d6a960db2f.slice/crio-5b793c7c485552d86da3663df950e95bdca1e307ae7c7a6c73a7f68077537b3a WatchSource:0}: Error finding container 5b793c7c485552d86da3663df950e95bdca1e307ae7c7a6c73a7f68077537b3a: Status 404 returned error can't find the container with id 5b793c7c485552d86da3663df950e95bdca1e307ae7c7a6c73a7f68077537b3a Nov 21 13:57:21 crc kubenswrapper[4675]: W1121 13:57:21.103747 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5a7071c_2a7a_431a_b580_a4f6038444b6.slice/crio-2cd6add43e9501f698f869dae6c9e3801e58042ea33bdbc2214718cdfe03cdd3 WatchSource:0}: Error finding container 2cd6add43e9501f698f869dae6c9e3801e58042ea33bdbc2214718cdfe03cdd3: Status 404 returned error can't find the container with id 2cd6add43e9501f698f869dae6c9e3801e58042ea33bdbc2214718cdfe03cdd3 Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.357345 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxvbz"] Nov 21 13:57:21 crc kubenswrapper[4675]: E1121 13:57:21.358647 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01d9dde-a9f3-4efc-8997-bf3914cffde9" containerName="swift-ring-rebalance" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.358688 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01d9dde-a9f3-4efc-8997-bf3914cffde9" containerName="swift-ring-rebalance" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.359214 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01d9dde-a9f3-4efc-8997-bf3914cffde9" containerName="swift-ring-rebalance" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.362542 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.372234 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxvbz"] Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.455547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmjs\" (UniqueName: \"kubernetes.io/projected/1d02a60a-11f5-447f-a27f-b4c6a7457c26-kube-api-access-jwmjs\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.455871 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-utilities\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.455979 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-catalog-content\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.566588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-utilities\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.567019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-catalog-content\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.567252 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwmjs\" (UniqueName: \"kubernetes.io/projected/1d02a60a-11f5-447f-a27f-b4c6a7457c26-kube-api-access-jwmjs\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.567507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-catalog-content\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.567750 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-utilities\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.611175 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jwmjs\" (UniqueName: \"kubernetes.io/projected/1d02a60a-11f5-447f-a27f-b4c6a7457c26-kube-api-access-jwmjs\") pod \"redhat-marketplace-pxvbz\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.692386 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.807571 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fft96" event={"ID":"a5a7071c-2a7a-431a-b580-a4f6038444b6","Type":"ContainerStarted","Data":"2cd6add43e9501f698f869dae6c9e3801e58042ea33bdbc2214718cdfe03cdd3"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.814423 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-szwd2" event={"ID":"85230bf1-fd16-4f24-9f1f-13d6a960db2f","Type":"ContainerStarted","Data":"5b793c7c485552d86da3663df950e95bdca1e307ae7c7a6c73a7f68077537b3a"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.819280 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2480-account-create-94rfs" event={"ID":"106316ae-7bc1-4693-a06d-9d93516af3a4","Type":"ContainerStarted","Data":"fc3c2d011afeee02df5aa6109b7a96de01564caa7efbf5a9220332d6d2b66a87"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.824798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3930-account-create-nfddg" event={"ID":"86850670-f736-42b5-87ed-147d7a572d73","Type":"ContainerStarted","Data":"a76de87ec99ad260a2fd28fdde529ee797214a01eb01a55ca1b81c5d8eef1c76"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.828853 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfwv" event={"ID":"8f97708f-a116-4805-b085-d887c811b56a","Type":"ContainerStarted","Data":"8d0904e0cc96f39719c02273a15acb91d4a41b24139bb5fb7b0544194bb80e29"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.842180 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-822b-account-create-khlxc" event={"ID":"069dca18-b43c-4d0a-a088-bd2c11e2e8d8","Type":"ContainerStarted","Data":"9b8e911e54745332ff8ddc6eebbf49cac594e86cb769cd9442e6c074bdda4080"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.848295 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-3930-account-create-nfddg" podStartSLOduration=9.848267881 podStartE2EDuration="9.848267881s" podCreationTimestamp="2025-11-21 13:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:21.842473297 +0000 UTC m=+1518.568888034" watchObservedRunningTime="2025-11-21 13:57:21.848267881 +0000 UTC m=+1518.574682608" Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.857007 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6fc9-account-create-htjrd" event={"ID":"d5e5671c-9388-46fd-b5f2-7c2bc71db709","Type":"ContainerStarted","Data":"c37225f54bfb16e577524db86b8ec8313d39e40d70f04269b6df6099b7d767d9"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.859011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2lt6m" 
event={"ID":"51637b2d-14b5-4bb3-95ee-e2cafe7780e2","Type":"ContainerStarted","Data":"fc3ed0548e33b91eca3ac57d7f87f1f4c5af2f87c6a659680a6f3c10966a9ad0"} Nov 21 13:57:21 crc kubenswrapper[4675]: I1121 13:57:21.985169 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.427035 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxvbz"] Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.870188 4675 generic.go:334] "Generic (PLEG): container finished" podID="d5e5671c-9388-46fd-b5f2-7c2bc71db709" containerID="b5a0619eed438ef5b9b63dae0c70eef53f6c47850b0002414a5d7edd4e8383d2" exitCode=0 Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.870700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6fc9-account-create-htjrd" event={"ID":"d5e5671c-9388-46fd-b5f2-7c2bc71db709","Type":"ContainerDied","Data":"b5a0619eed438ef5b9b63dae0c70eef53f6c47850b0002414a5d7edd4e8383d2"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.872605 4675 generic.go:334] "Generic (PLEG): container finished" podID="51637b2d-14b5-4bb3-95ee-e2cafe7780e2" containerID="a10c2f6749aca891fc816f61787aecbfd346faeeca768c86f85b7d1504f01496" exitCode=0 Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.872729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2lt6m" event={"ID":"51637b2d-14b5-4bb3-95ee-e2cafe7780e2","Type":"ContainerDied","Data":"a10c2f6749aca891fc816f61787aecbfd346faeeca768c86f85b7d1504f01496"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.874373 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fft96" event={"ID":"a5a7071c-2a7a-431a-b580-a4f6038444b6","Type":"ContainerStarted","Data":"3d617bcfd750d2c29758c22e11f3f439fcc593aa559c6c917543d88ab1b936fe"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.876180 4675 generic.go:334] "Generic (PLEG): container finished" podID="c9a259f8-81fa-445f-b17b-9d12b0114a50" containerID="2a2174c5ed73f358043c151d81c55c29c2a00d4baf484e6e6a2636c0ccade629" exitCode=0 Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.876229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gf6k8" event={"ID":"c9a259f8-81fa-445f-b17b-9d12b0114a50","Type":"ContainerDied","Data":"2a2174c5ed73f358043c151d81c55c29c2a00d4baf484e6e6a2636c0ccade629"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.877693 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxvbz" event={"ID":"1d02a60a-11f5-447f-a27f-b4c6a7457c26","Type":"ContainerStarted","Data":"72c5e1a64f094933420405b17a56fa8838e6b3cc1aae84bb7082e61e19f3afd3"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.880215 4675 generic.go:334] "Generic (PLEG): container finished" podID="86850670-f736-42b5-87ed-147d7a572d73" containerID="5d9bce0667bbbb8900a4362644a545488eb7a4cda512f80404c74fd6dc385134" exitCode=0 Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.880269 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3930-account-create-nfddg" event={"ID":"86850670-f736-42b5-87ed-147d7a572d73","Type":"ContainerDied","Data":"5d9bce0667bbbb8900a4362644a545488eb7a4cda512f80404c74fd6dc385134"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.882175 4675 generic.go:334] "Generic (PLEG): container finished" podID="069dca18-b43c-4d0a-a088-bd2c11e2e8d8" 
containerID="8e93cc4a5b28b1c6d34e8d66813c02fb2d8fa0a457888588fb3a1e0345b70c2c" exitCode=0 Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.882268 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-822b-account-create-khlxc" event={"ID":"069dca18-b43c-4d0a-a088-bd2c11e2e8d8","Type":"ContainerDied","Data":"8e93cc4a5b28b1c6d34e8d66813c02fb2d8fa0a457888588fb3a1e0345b70c2c"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.885158 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"90806727c998129e7b0d86b28e5f98eff4ebc8253bc305b10a036bb2f109d2d9"} Nov 21 13:57:22 crc kubenswrapper[4675]: I1121 13:57:22.920160 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-fft96" podStartSLOduration=11.920140474 podStartE2EDuration="11.920140474s" podCreationTimestamp="2025-11-21 13:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:22.902994748 +0000 UTC m=+1519.629409475" watchObservedRunningTime="2025-11-21 13:57:22.920140474 +0000 UTC m=+1519.646555211" Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.897910 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8d6qh" event={"ID":"8e21ce4f-da1d-4f89-8f41-6bb22c247d04","Type":"ContainerStarted","Data":"7ac16f0c69bc28cf9b0096e41a231efb4ec8780f387b78ba3cfd646fe897286d"} Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.900527 4675 generic.go:334] "Generic (PLEG): container finished" podID="a5a7071c-2a7a-431a-b580-a4f6038444b6" containerID="3d617bcfd750d2c29758c22e11f3f439fcc593aa559c6c917543d88ab1b936fe" exitCode=0 Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.900642 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fft96" event={"ID":"a5a7071c-2a7a-431a-b580-a4f6038444b6","Type":"ContainerDied","Data":"3d617bcfd750d2c29758c22e11f3f439fcc593aa559c6c917543d88ab1b936fe"} Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.903275 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"dddea1da-ad44-4caa-8719-55dea099d456","Type":"ContainerStarted","Data":"3216aa07f79f465918a0406038d043d807a151fc6b9aa53791c52762e6776dfa"} Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.905328 4675 generic.go:334] "Generic (PLEG): container finished" podID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerID="b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c" exitCode=0 Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.905383 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxvbz" event={"ID":"1d02a60a-11f5-447f-a27f-b4c6a7457c26","Type":"ContainerDied","Data":"b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c"} Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.907463 4675 generic.go:334] "Generic (PLEG): container finished" podID="85230bf1-fd16-4f24-9f1f-13d6a960db2f" containerID="ec2d13b0fc8869495c1c60be2c0e3b807628a3cb36567765d9980feed0ab3faf" exitCode=0 Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.907507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-szwd2" 
event={"ID":"85230bf1-fd16-4f24-9f1f-13d6a960db2f","Type":"ContainerDied","Data":"ec2d13b0fc8869495c1c60be2c0e3b807628a3cb36567765d9980feed0ab3faf"} Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.909593 4675 generic.go:334] "Generic (PLEG): container finished" podID="106316ae-7bc1-4693-a06d-9d93516af3a4" containerID="fac5d61a0b3deccc88c8251f86febb417303645964ed1a5d65724ad542e0e78a" exitCode=0 Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.909742 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2480-account-create-94rfs" event={"ID":"106316ae-7bc1-4693-a06d-9d93516af3a4","Type":"ContainerDied","Data":"fac5d61a0b3deccc88c8251f86febb417303645964ed1a5d65724ad542e0e78a"} Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.924106 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8d6qh" podStartSLOduration=8.377138355 podStartE2EDuration="24.92408195s" podCreationTimestamp="2025-11-21 13:56:59 +0000 UTC" firstStartedPulling="2025-11-21 13:57:00.955431086 +0000 UTC m=+1497.681845813" lastFinishedPulling="2025-11-21 13:57:17.502374681 +0000 UTC m=+1514.228789408" observedRunningTime="2025-11-21 13:57:23.916559643 +0000 UTC m=+1520.642974370" watchObservedRunningTime="2025-11-21 13:57:23.92408195 +0000 UTC m=+1520.650496677" Nov 21 13:57:23 crc kubenswrapper[4675]: I1121 13:57:23.963043 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.568075371 podStartE2EDuration="16.963018737s" podCreationTimestamp="2025-11-21 13:57:07 +0000 UTC" firstStartedPulling="2025-11-21 13:57:08.353870755 +0000 UTC m=+1505.080285482" lastFinishedPulling="2025-11-21 13:57:21.748814121 +0000 UTC m=+1518.475228848" observedRunningTime="2025-11-21 13:57:23.94743895 +0000 UTC m=+1520.673853677" watchObservedRunningTime="2025-11-21 13:57:23.963018737 +0000 UTC m=+1520.689433464" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.132059 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qpck5"] Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.136968 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.159628 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpck5"] Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.246673 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-catalog-content\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.246814 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-utilities\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.246866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwbh\" (UniqueName: \"kubernetes.io/projected/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-kube-api-access-hvwbh\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.349174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-utilities\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.349268 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwbh\" (UniqueName: \"kubernetes.io/projected/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-kube-api-access-hvwbh\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.349375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-catalog-content\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.349905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-catalog-content\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.350155 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-utilities\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.380043 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hvwbh\" (UniqueName: \"kubernetes.io/projected/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-kube-api-access-hvwbh\") pod \"certified-operators-qpck5\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:28 crc kubenswrapper[4675]: I1121 13:57:28.463690 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:30 crc kubenswrapper[4675]: I1121 13:57:30.977711 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79","Type":"ContainerStarted","Data":"b3d21ffdeb46cd5034788f6c283202b16eb8259b1f0ab2181927a834412c1ddc"} Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.346895 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.377865 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.417909 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.427782 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.439470 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fft96" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.452494 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.482702 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.484638 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.508575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e5671c-9388-46fd-b5f2-7c2bc71db709-operator-scripts\") pod \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.508687 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kvtg\" (UniqueName: \"kubernetes.io/projected/106316ae-7bc1-4693-a06d-9d93516af3a4-kube-api-access-7kvtg\") pod \"106316ae-7bc1-4693-a06d-9d93516af3a4\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.508807 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/106316ae-7bc1-4693-a06d-9d93516af3a4-operator-scripts\") pod \"106316ae-7bc1-4693-a06d-9d93516af3a4\" (UID: \"106316ae-7bc1-4693-a06d-9d93516af3a4\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.508850 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lnzq\" (UniqueName: \"kubernetes.io/projected/d5e5671c-9388-46fd-b5f2-7c2bc71db709-kube-api-access-5lnzq\") pod \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\" (UID: \"d5e5671c-9388-46fd-b5f2-7c2bc71db709\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.512346 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e5671c-9388-46fd-b5f2-7c2bc71db709-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5e5671c-9388-46fd-b5f2-7c2bc71db709" (UID: "d5e5671c-9388-46fd-b5f2-7c2bc71db709"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.512807 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106316ae-7bc1-4693-a06d-9d93516af3a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "106316ae-7bc1-4693-a06d-9d93516af3a4" (UID: "106316ae-7bc1-4693-a06d-9d93516af3a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.517645 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e5671c-9388-46fd-b5f2-7c2bc71db709-kube-api-access-5lnzq" (OuterVolumeSpecName: "kube-api-access-5lnzq") pod "d5e5671c-9388-46fd-b5f2-7c2bc71db709" (UID: "d5e5671c-9388-46fd-b5f2-7c2bc71db709"). InnerVolumeSpecName "kube-api-access-5lnzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.519361 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106316ae-7bc1-4693-a06d-9d93516af3a4-kube-api-access-7kvtg" (OuterVolumeSpecName: "kube-api-access-7kvtg") pod "106316ae-7bc1-4693-a06d-9d93516af3a4" (UID: "106316ae-7bc1-4693-a06d-9d93516af3a4"). InnerVolumeSpecName "kube-api-access-7kvtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.609044 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpck5"] Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.610173 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n859l\" (UniqueName: \"kubernetes.io/projected/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-kube-api-access-n859l\") pod \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.610226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86850670-f736-42b5-87ed-147d7a572d73-operator-scripts\") pod \"86850670-f736-42b5-87ed-147d7a572d73\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.610281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz24p\" (UniqueName: \"kubernetes.io/projected/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-kube-api-access-lz24p\") pod \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.610818 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86850670-f736-42b5-87ed-147d7a572d73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86850670-f736-42b5-87ed-147d7a572d73" (UID: "86850670-f736-42b5-87ed-147d7a572d73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.610942 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mlsx\" (UniqueName: \"kubernetes.io/projected/85230bf1-fd16-4f24-9f1f-13d6a960db2f-kube-api-access-9mlsx\") pod \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611010 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a259f8-81fa-445f-b17b-9d12b0114a50-operator-scripts\") pod \"c9a259f8-81fa-445f-b17b-9d12b0114a50\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611037 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpzp5\" (UniqueName: \"kubernetes.io/projected/86850670-f736-42b5-87ed-147d7a572d73-kube-api-access-rpzp5\") pod \"86850670-f736-42b5-87ed-147d7a572d73\" (UID: \"86850670-f736-42b5-87ed-147d7a572d73\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611447 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a259f8-81fa-445f-b17b-9d12b0114a50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9a259f8-81fa-445f-b17b-9d12b0114a50" (UID: "c9a259f8-81fa-445f-b17b-9d12b0114a50"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611514 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8wlk\" (UniqueName: \"kubernetes.io/projected/c9a259f8-81fa-445f-b17b-9d12b0114a50-kube-api-access-v8wlk\") pod \"c9a259f8-81fa-445f-b17b-9d12b0114a50\" (UID: \"c9a259f8-81fa-445f-b17b-9d12b0114a50\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611562 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-operator-scripts\") pod \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\" (UID: \"51637b2d-14b5-4bb3-95ee-e2cafe7780e2\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611600 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wj42\" (UniqueName: \"kubernetes.io/projected/a5a7071c-2a7a-431a-b580-a4f6038444b6-kube-api-access-9wj42\") pod \"a5a7071c-2a7a-431a-b580-a4f6038444b6\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611704 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a7071c-2a7a-431a-b580-a4f6038444b6-operator-scripts\") pod \"a5a7071c-2a7a-431a-b580-a4f6038444b6\" (UID: \"a5a7071c-2a7a-431a-b580-a4f6038444b6\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-operator-scripts\") pod \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\" (UID: \"069dca18-b43c-4d0a-a088-bd2c11e2e8d8\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.611805 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85230bf1-fd16-4f24-9f1f-13d6a960db2f-operator-scripts\") pod \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\" (UID: \"85230bf1-fd16-4f24-9f1f-13d6a960db2f\") " Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612480 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a259f8-81fa-445f-b17b-9d12b0114a50-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612503 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e5671c-9388-46fd-b5f2-7c2bc71db709-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612512 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kvtg\" (UniqueName: \"kubernetes.io/projected/106316ae-7bc1-4693-a06d-9d93516af3a4-kube-api-access-7kvtg\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612522 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86850670-f736-42b5-87ed-147d7a572d73-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612530 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/106316ae-7bc1-4693-a06d-9d93516af3a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612539 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lnzq\" (UniqueName: \"kubernetes.io/projected/d5e5671c-9388-46fd-b5f2-7c2bc71db709-kube-api-access-5lnzq\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.612948 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85230bf1-fd16-4f24-9f1f-13d6a960db2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85230bf1-fd16-4f24-9f1f-13d6a960db2f" (UID: "85230bf1-fd16-4f24-9f1f-13d6a960db2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.614741 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86850670-f736-42b5-87ed-147d7a572d73-kube-api-access-rpzp5" (OuterVolumeSpecName: "kube-api-access-rpzp5") pod "86850670-f736-42b5-87ed-147d7a572d73" (UID: "86850670-f736-42b5-87ed-147d7a572d73"). InnerVolumeSpecName "kube-api-access-rpzp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.614862 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51637b2d-14b5-4bb3-95ee-e2cafe7780e2" (UID: "51637b2d-14b5-4bb3-95ee-e2cafe7780e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.615518 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "069dca18-b43c-4d0a-a088-bd2c11e2e8d8" (UID: "069dca18-b43c-4d0a-a088-bd2c11e2e8d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.615575 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a7071c-2a7a-431a-b580-a4f6038444b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5a7071c-2a7a-431a-b580-a4f6038444b6" (UID: "a5a7071c-2a7a-431a-b580-a4f6038444b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.616179 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-kube-api-access-n859l" (OuterVolumeSpecName: "kube-api-access-n859l") pod "51637b2d-14b5-4bb3-95ee-e2cafe7780e2" (UID: "51637b2d-14b5-4bb3-95ee-e2cafe7780e2"). InnerVolumeSpecName "kube-api-access-n859l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.617404 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85230bf1-fd16-4f24-9f1f-13d6a960db2f-kube-api-access-9mlsx" (OuterVolumeSpecName: "kube-api-access-9mlsx") pod "85230bf1-fd16-4f24-9f1f-13d6a960db2f" (UID: "85230bf1-fd16-4f24-9f1f-13d6a960db2f"). InnerVolumeSpecName "kube-api-access-9mlsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.618010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-kube-api-access-lz24p" (OuterVolumeSpecName: "kube-api-access-lz24p") pod "069dca18-b43c-4d0a-a088-bd2c11e2e8d8" (UID: "069dca18-b43c-4d0a-a088-bd2c11e2e8d8"). InnerVolumeSpecName "kube-api-access-lz24p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.628927 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a7071c-2a7a-431a-b580-a4f6038444b6-kube-api-access-9wj42" (OuterVolumeSpecName: "kube-api-access-9wj42") pod "a5a7071c-2a7a-431a-b580-a4f6038444b6" (UID: "a5a7071c-2a7a-431a-b580-a4f6038444b6"). InnerVolumeSpecName "kube-api-access-9wj42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.629206 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a259f8-81fa-445f-b17b-9d12b0114a50-kube-api-access-v8wlk" (OuterVolumeSpecName: "kube-api-access-v8wlk") pod "c9a259f8-81fa-445f-b17b-9d12b0114a50" (UID: "c9a259f8-81fa-445f-b17b-9d12b0114a50"). InnerVolumeSpecName "kube-api-access-v8wlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713870 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8wlk\" (UniqueName: \"kubernetes.io/projected/c9a259f8-81fa-445f-b17b-9d12b0114a50-kube-api-access-v8wlk\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713903 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713916 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wj42\" (UniqueName: \"kubernetes.io/projected/a5a7071c-2a7a-431a-b580-a4f6038444b6-kube-api-access-9wj42\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713927 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a7071c-2a7a-431a-b580-a4f6038444b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713940 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713951 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85230bf1-fd16-4f24-9f1f-13d6a960db2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713962 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n859l\" (UniqueName: \"kubernetes.io/projected/51637b2d-14b5-4bb3-95ee-e2cafe7780e2-kube-api-access-n859l\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713974 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz24p\" (UniqueName: 
\"kubernetes.io/projected/069dca18-b43c-4d0a-a088-bd2c11e2e8d8-kube-api-access-lz24p\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.713989 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mlsx\" (UniqueName: \"kubernetes.io/projected/85230bf1-fd16-4f24-9f1f-13d6a960db2f-kube-api-access-9mlsx\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.714000 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpzp5\" (UniqueName: \"kubernetes.io/projected/86850670-f736-42b5-87ed-147d7a572d73-kube-api-access-rpzp5\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:31 crc kubenswrapper[4675]: E1121 13:57:31.821519 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b5c9cb_9a2f_471c_894d_349dd2f1ad8a.slice/crio-conmon-31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.993141 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3930-account-create-nfddg" event={"ID":"86850670-f736-42b5-87ed-147d7a572d73","Type":"ContainerDied","Data":"a76de87ec99ad260a2fd28fdde529ee797214a01eb01a55ca1b81c5d8eef1c76"} Nov 21 13:57:31 crc kubenswrapper[4675]: I1121 13:57:31.993180 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76de87ec99ad260a2fd28fdde529ee797214a01eb01a55ca1b81c5d8eef1c76" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:31.993246 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3930-account-create-nfddg" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:31.998301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfwv" event={"ID":"8f97708f-a116-4805-b085-d887c811b56a","Type":"ContainerStarted","Data":"ecc9dafcfd46069f8d80d1325eabea0071123d924cda7062d60f64f8265eafad"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.001939 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fft96" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.001952 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fft96" event={"ID":"a5a7071c-2a7a-431a-b580-a4f6038444b6","Type":"ContainerDied","Data":"2cd6add43e9501f698f869dae6c9e3801e58042ea33bdbc2214718cdfe03cdd3"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.001979 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd6add43e9501f698f869dae6c9e3801e58042ea33bdbc2214718cdfe03cdd3" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.004626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gf6k8" event={"ID":"c9a259f8-81fa-445f-b17b-9d12b0114a50","Type":"ContainerDied","Data":"e953584fc1439632bcc512a27d2376c5688146728a901c133fb60c1bb063a8f4"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.004661 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e953584fc1439632bcc512a27d2376c5688146728a901c133fb60c1bb063a8f4" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.004725 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-gf6k8" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.019795 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-szwd2" event={"ID":"85230bf1-fd16-4f24-9f1f-13d6a960db2f","Type":"ContainerDied","Data":"5b793c7c485552d86da3663df950e95bdca1e307ae7c7a6c73a7f68077537b3a"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.019843 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b793c7c485552d86da3663df950e95bdca1e307ae7c7a6c73a7f68077537b3a" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.020121 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-szwd2" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.021051 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-822b-account-create-khlxc" event={"ID":"069dca18-b43c-4d0a-a088-bd2c11e2e8d8","Type":"ContainerDied","Data":"9b8e911e54745332ff8ddc6eebbf49cac594e86cb769cd9442e6c074bdda4080"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.021099 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8e911e54745332ff8ddc6eebbf49cac594e86cb769cd9442e6c074bdda4080" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.021380 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-822b-account-create-khlxc" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.023152 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"85a39a1d12f42a5c20b78e43f2ba3c90c2cec6c665b40b148a105923fb8e5965"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.023206 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"7d8db0f7b656068b5ddfb8cae3d987bc60c69a6050a2b4adb62cbf9f7ae77e5b"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.025721 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79","Type":"ContainerStarted","Data":"0643ba6fcbef24bc12dde0bd0b6146d13bad959771848e61a286c24f687c58fa"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.029365 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wrfwv" podStartSLOduration=9.980282771 podStartE2EDuration="20.029307763s" podCreationTimestamp="2025-11-21 13:57:12 +0000 UTC" firstStartedPulling="2025-11-21 13:57:21.085994549 +0000 UTC m=+1517.812409276" lastFinishedPulling="2025-11-21 13:57:31.135019541 +0000 UTC m=+1527.861434268" observedRunningTime="2025-11-21 13:57:32.014769942 +0000 UTC m=+1528.741184669" watchObservedRunningTime="2025-11-21 13:57:32.029307763 +0000 UTC m=+1528.755722490" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.031118 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6fc9-account-create-htjrd" event={"ID":"d5e5671c-9388-46fd-b5f2-7c2bc71db709","Type":"ContainerDied","Data":"c37225f54bfb16e577524db86b8ec8313d39e40d70f04269b6df6099b7d767d9"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.031170 4675 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c37225f54bfb16e577524db86b8ec8313d39e40d70f04269b6df6099b7d767d9" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.032962 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6fc9-account-create-htjrd" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.033075 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2lt6m" event={"ID":"51637b2d-14b5-4bb3-95ee-e2cafe7780e2","Type":"ContainerDied","Data":"fc3ed0548e33b91eca3ac57d7f87f1f4c5af2f87c6a659680a6f3c10966a9ad0"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.033123 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc3ed0548e33b91eca3ac57d7f87f1f4c5af2f87c6a659680a6f3c10966a9ad0" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.033552 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2lt6m" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.034835 4675 generic.go:334] "Generic (PLEG): container finished" podID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerID="31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5" exitCode=0 Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.034888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpck5" event={"ID":"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a","Type":"ContainerDied","Data":"31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.034908 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpck5" event={"ID":"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a","Type":"ContainerStarted","Data":"dac4eb45e0f37a0810ca08c55442bf524166b62d466b809c6693f46e90dff6a1"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.039258 4675 generic.go:334] "Generic (PLEG): container finished" podID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerID="00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4" exitCode=0 Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.039312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxvbz" event={"ID":"1d02a60a-11f5-447f-a27f-b4c6a7457c26","Type":"ContainerDied","Data":"00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.044946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2480-account-create-94rfs" event={"ID":"106316ae-7bc1-4693-a06d-9d93516af3a4","Type":"ContainerDied","Data":"fc3c2d011afeee02df5aa6109b7a96de01564caa7efbf5a9220332d6d2b66a87"} Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.045003 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc3c2d011afeee02df5aa6109b7a96de01564caa7efbf5a9220332d6d2b66a87" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.045110 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2480-account-create-94rfs" Nov 21 13:57:32 crc kubenswrapper[4675]: I1121 13:57:32.053879 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=30.053861153 podStartE2EDuration="30.053861153s" podCreationTimestamp="2025-11-21 13:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:32.052994971 +0000 UTC m=+1528.779409698" watchObservedRunningTime="2025-11-21 13:57:32.053861153 +0000 UTC m=+1528.780275880" Nov 21 13:57:33 crc kubenswrapper[4675]: I1121 13:57:33.056671 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"352eaad92e57973e9a86ced380a49fd9b8470663b64262e66a8f8870e7373758"} Nov 21 13:57:33 crc kubenswrapper[4675]: I1121 13:57:33.057335 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"449391e88ecd1ad0ce45d6e2fd21ea06633620cfde475c72a8b7a8588ba42ecf"} Nov 21 13:57:33 crc kubenswrapper[4675]: I1121 13:57:33.334316 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:33 crc kubenswrapper[4675]: I1121 13:57:33.334378 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:33 crc kubenswrapper[4675]: I1121 13:57:33.344472 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:34 crc kubenswrapper[4675]: I1121 13:57:34.070258 4675 generic.go:334] "Generic (PLEG): container finished" podID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerID="71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6" exitCode=0 Nov 21 13:57:34 crc kubenswrapper[4675]: I1121 13:57:34.070406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpck5" event={"ID":"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a","Type":"ContainerDied","Data":"71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6"} Nov 21 13:57:34 crc kubenswrapper[4675]: I1121 13:57:34.075551 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxvbz" event={"ID":"1d02a60a-11f5-447f-a27f-b4c6a7457c26","Type":"ContainerStarted","Data":"a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c"} Nov 21 13:57:34 crc kubenswrapper[4675]: I1121 13:57:34.081456 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 21 13:57:34 crc kubenswrapper[4675]: I1121 13:57:34.180120 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxvbz" podStartSLOduration=10.943523662 podStartE2EDuration="13.180101353s" podCreationTimestamp="2025-11-21 13:57:21 +0000 UTC" firstStartedPulling="2025-11-21 13:57:30.952949738 +0000 UTC m=+1527.679364475" lastFinishedPulling="2025-11-21 13:57:33.189527449 +0000 UTC m=+1529.915942166" observedRunningTime="2025-11-21 13:57:34.158581958 +0000 UTC m=+1530.884996705" watchObservedRunningTime="2025-11-21 13:57:34.180101353 +0000 UTC m=+1530.906516080" Nov 21 13:57:36 crc kubenswrapper[4675]: I1121 
13:57:36.098194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpck5" event={"ID":"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a","Type":"ContainerStarted","Data":"08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f"} Nov 21 13:57:36 crc kubenswrapper[4675]: I1121 13:57:36.106872 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"efb3f7039ea5f51746bd3b21f860cb4ec981dfb37a2335a1b9c09a94ba8b9593"} Nov 21 13:57:36 crc kubenswrapper[4675]: I1121 13:57:36.106926 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"2754091329f482217cf023424965105bb0b973f89b1f5731ace2c98326dcd4f7"} Nov 21 13:57:36 crc kubenswrapper[4675]: I1121 13:57:36.106943 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"bf32bef206a31f0bc9cdbb7a1dcbe3f0441f7e43cd3eefc45e9e4763887a3134"} Nov 21 13:57:36 crc kubenswrapper[4675]: I1121 13:57:36.106959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"bd8edeabcd5e1e1adc3e96db4496fa32b86940d5899748cc8ea7cc5b12bc47b7"} Nov 21 13:57:36 crc kubenswrapper[4675]: I1121 13:57:36.124537 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qpck5" podStartSLOduration=5.11992346 podStartE2EDuration="8.124516106s" podCreationTimestamp="2025-11-21 13:57:28 +0000 UTC" firstStartedPulling="2025-11-21 13:57:32.03726026 +0000 UTC m=+1528.763674987" lastFinishedPulling="2025-11-21 13:57:35.041852906 +0000 UTC m=+1531.768267633" observedRunningTime="2025-11-21 13:57:36.119899982 +0000 UTC m=+1532.846314709" watchObservedRunningTime="2025-11-21 13:57:36.124516106 +0000 UTC m=+1532.850930833" Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.126057 4675 generic.go:334] "Generic (PLEG): container finished" podID="8f97708f-a116-4805-b085-d887c811b56a" containerID="ecc9dafcfd46069f8d80d1325eabea0071123d924cda7062d60f64f8265eafad" exitCode=0 Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.126325 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfwv" event={"ID":"8f97708f-a116-4805-b085-d887c811b56a","Type":"ContainerDied","Data":"ecc9dafcfd46069f8d80d1325eabea0071123d924cda7062d60f64f8265eafad"} Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.134533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"82f23623f8b02ff19bc39dd00a03f5b343b86563cb9920dbd9ba6a5504110753"} Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.134580 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"38dc80726120f20dd55628a4e949cdb3e0df54e071b0b823a8c7035137db9c57"} Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.134594 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"f072c5d9fa9625bd583e21af5f61ab4b2da5faabc20a0bce3c96f6f0be73763a"} Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.464678 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.464970 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:38 crc kubenswrapper[4675]: I1121 13:57:38.570872 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.149234 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"e2f2d89c71eb9e9a1fbe108635bdbd91269a760d36b3eb6ec4d021c5da6a7afd"} Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.149578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"41a6723ba6269dd826cd9428bda076d3107c4b71a0e08ec694bd77dd28b1ccf8"} Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.149589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"001ede45014394b64e2a521f9c2163d77f1794a08aef3c1566d0313adcd19a9d"} Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.149598 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29cc3528-47d5-4479-85fc-37f8e53f1caf","Type":"ContainerStarted","Data":"d8ac05234d1435ab14f02c5792fdbf3378b09b64600ae14fd04c374527493eb2"} Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.186140 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.038004122 podStartE2EDuration="53.186117192s" podCreationTimestamp="2025-11-21 13:56:46 +0000 UTC" firstStartedPulling="2025-11-21 13:57:22.195757432 +0000 UTC m=+1518.922172149" lastFinishedPulling="2025-11-21 13:57:37.343870492 +0000 UTC m=+1534.070285219" observedRunningTime="2025-11-21 13:57:39.180717738 +0000 UTC m=+1535.907132475" watchObservedRunningTime="2025-11-21 13:57:39.186117192 +0000 UTC m=+1535.912531919" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.539310 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zzn4m"] Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540163 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106316ae-7bc1-4693-a06d-9d93516af3a4" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540182 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="106316ae-7bc1-4693-a06d-9d93516af3a4" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540204 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86850670-f736-42b5-87ed-147d7a572d73" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540211 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="86850670-f736-42b5-87ed-147d7a572d73" containerName="mariadb-account-create" Nov 21 13:57:39 crc 
kubenswrapper[4675]: E1121 13:57:39.540233 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e5671c-9388-46fd-b5f2-7c2bc71db709" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540241 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e5671c-9388-46fd-b5f2-7c2bc71db709" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540250 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a259f8-81fa-445f-b17b-9d12b0114a50" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540257 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a259f8-81fa-445f-b17b-9d12b0114a50" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540272 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a7071c-2a7a-431a-b580-a4f6038444b6" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540281 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a7071c-2a7a-431a-b580-a4f6038444b6" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540294 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51637b2d-14b5-4bb3-95ee-e2cafe7780e2" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540302 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="51637b2d-14b5-4bb3-95ee-e2cafe7780e2" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540337 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85230bf1-fd16-4f24-9f1f-13d6a960db2f" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540344 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="85230bf1-fd16-4f24-9f1f-13d6a960db2f" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: E1121 13:57:39.540357 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069dca18-b43c-4d0a-a088-bd2c11e2e8d8" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540364 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="069dca18-b43c-4d0a-a088-bd2c11e2e8d8" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540626 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a259f8-81fa-445f-b17b-9d12b0114a50" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540649 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="51637b2d-14b5-4bb3-95ee-e2cafe7780e2" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540667 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="106316ae-7bc1-4693-a06d-9d93516af3a4" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540681 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="069dca18-b43c-4d0a-a088-bd2c11e2e8d8" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540695 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="85230bf1-fd16-4f24-9f1f-13d6a960db2f" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 
13:57:39.540704 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="86850670-f736-42b5-87ed-147d7a572d73" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540716 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e5671c-9388-46fd-b5f2-7c2bc71db709" containerName="mariadb-account-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.540736 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a7071c-2a7a-431a-b580-a4f6038444b6" containerName="mariadb-database-create" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.542204 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.547295 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.555968 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zzn4m"] Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.608396 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.699748 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-combined-ca-bundle\") pod \"8f97708f-a116-4805-b085-d887c811b56a\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.699856 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtvlj\" (UniqueName: \"kubernetes.io/projected/8f97708f-a116-4805-b085-d887c811b56a-kube-api-access-mtvlj\") pod \"8f97708f-a116-4805-b085-d887c811b56a\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.699999 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-config-data\") pod \"8f97708f-a116-4805-b085-d887c811b56a\" (UID: \"8f97708f-a116-4805-b085-d887c811b56a\") " Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.700368 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.700400 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.700446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7q2\" (UniqueName: \"kubernetes.io/projected/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-kube-api-access-bb7q2\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: 
\"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.700554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-config\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.700585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.700602 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.705118 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f97708f-a116-4805-b085-d887c811b56a-kube-api-access-mtvlj" (OuterVolumeSpecName: "kube-api-access-mtvlj") pod "8f97708f-a116-4805-b085-d887c811b56a" (UID: "8f97708f-a116-4805-b085-d887c811b56a"). InnerVolumeSpecName "kube-api-access-mtvlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.730489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f97708f-a116-4805-b085-d887c811b56a" (UID: "8f97708f-a116-4805-b085-d887c811b56a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.753487 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-config-data" (OuterVolumeSpecName: "config-data") pod "8f97708f-a116-4805-b085-d887c811b56a" (UID: "8f97708f-a116-4805-b085-d887c811b56a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802221 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-config\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802271 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802365 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802385 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb7q2\" (UniqueName: \"kubernetes.io/projected/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-kube-api-access-bb7q2\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802540 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtvlj\" (UniqueName: \"kubernetes.io/projected/8f97708f-a116-4805-b085-d887c811b56a-kube-api-access-mtvlj\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802555 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.802566 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f97708f-a116-4805-b085-d887c811b56a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.803365 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.803375 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-config\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.803461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.803757 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.804204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.821817 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb7q2\" (UniqueName: \"kubernetes.io/projected/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-kube-api-access-bb7q2\") pod \"dnsmasq-dns-5c79d794d7-zzn4m\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:39 crc kubenswrapper[4675]: I1121 13:57:39.938773 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.174192 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfwv" event={"ID":"8f97708f-a116-4805-b085-d887c811b56a","Type":"ContainerDied","Data":"8d0904e0cc96f39719c02273a15acb91d4a41b24139bb5fb7b0544194bb80e29"} Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.174541 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0904e0cc96f39719c02273a15acb91d4a41b24139bb5fb7b0544194bb80e29" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.174255 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wrfwv" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.374303 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zzn4m"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.425953 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-skvhr"] Nov 21 13:57:40 crc kubenswrapper[4675]: E1121 13:57:40.426626 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f97708f-a116-4805-b085-d887c811b56a" containerName="keystone-db-sync" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.426648 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f97708f-a116-4805-b085-d887c811b56a" containerName="keystone-db-sync" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.426910 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f97708f-a116-4805-b085-d887c811b56a" containerName="keystone-db-sync" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.427839 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.442182 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.442441 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.442601 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.442755 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.442931 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tzj4r" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.443630 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-8wkr9"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.445985 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.454656 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-8wkr9"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.478254 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-skvhr"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.508026 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zzn4m"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.517939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-scripts\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-combined-ca-bundle\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518163 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-config\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518212 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518241 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-config-data\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518445 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6hp\" (UniqueName: \"kubernetes.io/projected/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-kube-api-access-gd6hp\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518543 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518594 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-svc\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518624 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-fernet-keys\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518677 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-n65cw"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518716 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518777 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvv2\" (UniqueName: \"kubernetes.io/projected/1c814a96-9fce-4e41-a875-749acc27ecd6-kube-api-access-hgvv2\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.518808 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-credential-keys\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.523437 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.526567 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.527358 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-wcdqh" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.534371 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-n65cw"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.621892 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6hp\" (UniqueName: \"kubernetes.io/projected/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-kube-api-access-gd6hp\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.621973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-svc\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622060 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-fernet-keys\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622142 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-combined-ca-bundle\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622181 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvv2\" (UniqueName: \"kubernetes.io/projected/1c814a96-9fce-4e41-a875-749acc27ecd6-kube-api-access-hgvv2\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622208 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcs5\" (UniqueName: \"kubernetes.io/projected/baacdfb7-787a-462a-8102-472a47283224-kube-api-access-fmcs5\") 
pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622236 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-credential-keys\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622288 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-scripts\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622339 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-combined-ca-bundle\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-config\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622411 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-config-data\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.622461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-config-data\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.624262 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.624531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-svc\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 
13:57:40.624762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.624959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-config\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.626127 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.632164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-scripts\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.638322 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-credential-keys\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.638826 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-fernet-keys\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.641247 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-combined-ca-bundle\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.645862 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-config-data\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.677576 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6hp\" (UniqueName: \"kubernetes.io/projected/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-kube-api-access-gd6hp\") pod \"dnsmasq-dns-5b868669f-8wkr9\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.732426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvv2\" (UniqueName: 
\"kubernetes.io/projected/1c814a96-9fce-4e41-a875-749acc27ecd6-kube-api-access-hgvv2\") pod \"keystone-bootstrap-skvhr\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.733694 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-combined-ca-bundle\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.743486 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcs5\" (UniqueName: \"kubernetes.io/projected/baacdfb7-787a-462a-8102-472a47283224-kube-api-access-fmcs5\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.743804 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-config-data\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.744496 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-combined-ca-bundle\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.774990 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.795812 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.803904 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-config-data\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.811472 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2jx6q"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.824242 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.833331 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2bmhb" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.836203 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.838505 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.855563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-config\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.855646 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnvt\" (UniqueName: \"kubernetes.io/projected/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-kube-api-access-5rnvt\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.855669 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-combined-ca-bundle\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.865870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcs5\" (UniqueName: \"kubernetes.io/projected/baacdfb7-787a-462a-8102-472a47283224-kube-api-access-fmcs5\") pod \"heat-db-sync-n65cw\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.917634 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2jx6q"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.937173 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-w28m5"] Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.938618 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.952907 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.953228 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.953379 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mkwll" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976624 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtdfh\" (UniqueName: \"kubernetes.io/projected/d8406cb5-f871-4355-811c-7090afd8aa2e-kube-api-access-vtdfh\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976666 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-config\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976684 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8406cb5-f871-4355-811c-7090afd8aa2e-etc-machine-id\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976726 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-combined-ca-bundle\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976747 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rnvt\" (UniqueName: \"kubernetes.io/projected/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-kube-api-access-5rnvt\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976768 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-combined-ca-bundle\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-config-data\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976844 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-db-sync-config-data\") pod 
\"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:40 crc kubenswrapper[4675]: I1121 13:57:40.976902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-scripts\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:40.992151 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-combined-ca-bundle\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.022785 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-config\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.022852 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w28m5"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.060278 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rnvt\" (UniqueName: \"kubernetes.io/projected/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-kube-api-access-5rnvt\") pod \"neutron-db-sync-2jx6q\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.071981 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-g725b"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.073500 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.079648 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7sbnn" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.079904 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.080115 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.081244 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.082765 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-config-data\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.082798 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-db-sync-config-data\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.082866 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-scripts\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.082934 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtdfh\" (UniqueName: \"kubernetes.io/projected/d8406cb5-f871-4355-811c-7090afd8aa2e-kube-api-access-vtdfh\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.082954 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8406cb5-f871-4355-811c-7090afd8aa2e-etc-machine-id\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.082989 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-combined-ca-bundle\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.087375 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8406cb5-f871-4355-811c-7090afd8aa2e-etc-machine-id\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.092735 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-scripts\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.093588 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-config-data\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.094430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-db-sync-config-data\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.110106 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-combined-ca-bundle\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.130131 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-g725b"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.135347 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtdfh\" (UniqueName: \"kubernetes.io/projected/d8406cb5-f871-4355-811c-7090afd8aa2e-kube-api-access-vtdfh\") pod \"cinder-db-sync-w28m5\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.158616 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-n65cw" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.185563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-config-data\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.185664 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kqp6\" (UniqueName: \"kubernetes.io/projected/b87d2cb3-6cf6-4f8e-ad16-021304428c63-kube-api-access-7kqp6\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.185738 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-scripts\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.185801 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-combined-ca-bundle\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.185876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87d2cb3-6cf6-4f8e-ad16-021304428c63-logs\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.221241 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ljvgq"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.222699 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.235625 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5zw64" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.236160 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.240195 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ljvgq"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.248250 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" event={"ID":"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f","Type":"ContainerStarted","Data":"421510a0a3996111dcd0a3652f8c639fd634e63f8af7dd5a2b198bcec1aa3bcf"} Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.267597 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-8wkr9"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292260 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-config-data\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kqp6\" (UniqueName: \"kubernetes.io/projected/b87d2cb3-6cf6-4f8e-ad16-021304428c63-kube-api-access-7kqp6\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292389 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-combined-ca-bundle\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292422 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-db-sync-config-data\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292476 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-scripts\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292531 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-combined-ca-bundle\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292572 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrm2g\" 
(UniqueName: \"kubernetes.io/projected/9ef50e12-86e6-4c25-b99e-4fc6506d3890-kube-api-access-jrm2g\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.292619 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87d2cb3-6cf6-4f8e-ad16-021304428c63-logs\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.304434 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87d2cb3-6cf6-4f8e-ad16-021304428c63-logs\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.334710 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rbbch"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.336532 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.343635 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-scripts\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.343957 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-combined-ca-bundle\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.344547 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-config-data\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.347183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kqp6\" (UniqueName: \"kubernetes.io/projected/b87d2cb3-6cf6-4f8e-ad16-021304428c63-kube-api-access-7kqp6\") pod \"placement-db-sync-g725b\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.377802 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rbbch"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrm2g\" (UniqueName: \"kubernetes.io/projected/9ef50e12-86e6-4c25-b99e-4fc6506d3890-kube-api-access-jrm2g\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394182 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-svc\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394218 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394262 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-config\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394361 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-combined-ca-bundle\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394378 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvh8r\" (UniqueName: \"kubernetes.io/projected/94e07f71-8bfd-45f4-b54f-775cbce3b611-kube-api-access-dvh8r\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394402 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-db-sync-config-data\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.394432 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.399426 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.401018 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-db-sync-config-data\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc 
kubenswrapper[4675]: I1121 13:57:41.401681 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-combined-ca-bundle\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.402736 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.409455 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.410236 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w28m5" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.410377 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.410685 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.416207 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrm2g\" (UniqueName: \"kubernetes.io/projected/9ef50e12-86e6-4c25-b99e-4fc6506d3890-kube-api-access-jrm2g\") pod \"barbican-db-sync-ljvgq\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.455532 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g725b" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.495974 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-scripts\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496041 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvh8r\" (UniqueName: \"kubernetes.io/projected/94e07f71-8bfd-45f4-b54f-775cbce3b611-kube-api-access-dvh8r\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496254 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496298 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-config-data\") pod 
\"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496357 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8mq\" (UniqueName: \"kubernetes.io/projected/87a33291-326b-4010-a851-ec2d41c8a754-kube-api-access-ch8mq\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496475 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-svc\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496522 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496580 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496648 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-run-httpd\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-config\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.496704 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-log-httpd\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.508956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 
13:57:41.510025 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.511854 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-config\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.512153 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.513032 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-svc\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.521792 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvh8r\" (UniqueName: \"kubernetes.io/projected/94e07f71-8bfd-45f4-b54f-775cbce3b611-kube-api-access-dvh8r\") pod \"dnsmasq-dns-cf78879c9-rbbch\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.570300 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.598609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-run-httpd\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.598676 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-log-httpd\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.598747 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-scripts\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.598868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-config-data\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.598914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8mq\" (UniqueName: \"kubernetes.io/projected/87a33291-326b-4010-a851-ec2d41c8a754-kube-api-access-ch8mq\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.599020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.599051 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.602221 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-run-httpd\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.602458 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-log-httpd\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.605894 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-scripts\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 
21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.607023 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.618825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.626724 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-config-data\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.626919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8mq\" (UniqueName: \"kubernetes.io/projected/87a33291-326b-4010-a851-ec2d41c8a754-kube-api-access-ch8mq\") pod \"ceilometer-0\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.679742 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-skvhr"] Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.694111 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.694160 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:41 crc kubenswrapper[4675]: W1121 13:57:41.698286 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c814a96_9fce_4e41_a875_749acc27ecd6.slice/crio-69971148b579b659d7b2852742c610bbeebac9d8cfcadaf8996358ef7f195fb2 WatchSource:0}: Error finding container 69971148b579b659d7b2852742c610bbeebac9d8cfcadaf8996358ef7f195fb2: Status 404 returned error can't find the container with id 69971148b579b659d7b2852742c610bbeebac9d8cfcadaf8996358ef7f195fb2 Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.740731 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.788288 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.790303 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4675]: I1121 13:57:41.956527 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-8wkr9"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.063858 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-n65cw"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.083597 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2jx6q"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.329033 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w28m5"] Nov 21 13:57:42 crc kubenswrapper[4675]: W1121 13:57:42.337920 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8406cb5_f871_4355_811c_7090afd8aa2e.slice/crio-dd008beade3f9ea953bed91543ce6c517cec17e50feef22cf9b542aa39619a14 WatchSource:0}: Error finding container dd008beade3f9ea953bed91543ce6c517cec17e50feef22cf9b542aa39619a14: Status 404 returned error can't find the container with id dd008beade3f9ea953bed91543ce6c517cec17e50feef22cf9b542aa39619a14 Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.354017 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-g725b"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.354343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2jx6q" event={"ID":"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4","Type":"ContainerStarted","Data":"404db1807f765a07981bb2285fab5fe1c2bf33b384ac4e031196b4d9d286357e"} Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.372467 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-skvhr" event={"ID":"1c814a96-9fce-4e41-a875-749acc27ecd6","Type":"ContainerStarted","Data":"69971148b579b659d7b2852742c610bbeebac9d8cfcadaf8996358ef7f195fb2"} Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.398527 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n65cw" event={"ID":"baacdfb7-787a-462a-8102-472a47283224","Type":"ContainerStarted","Data":"a4955f99decc7bb4e350cd08abd4342b674910a3ea08e0fdb0f0678c4b0dabc3"} Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.411784 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-skvhr" podStartSLOduration=2.4117612 podStartE2EDuration="2.4117612s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:42.402135691 +0000 UTC m=+1539.128550418" watchObservedRunningTime="2025-11-21 13:57:42.4117612 +0000 UTC m=+1539.138175947" Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.415504 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" event={"ID":"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7","Type":"ContainerStarted","Data":"66ca080df643568533d99b76a47a0915bd04b46fd0a91002fcb35fa2828553e9"} Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.423614 4675 generic.go:334] "Generic (PLEG): container finished" podID="4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" containerID="bb63806ecf49a54d6a4f4fde262b053f392449054f78c21c5d63d2d619d82263" exitCode=0 Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.424779 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" event={"ID":"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f","Type":"ContainerDied","Data":"bb63806ecf49a54d6a4f4fde262b053f392449054f78c21c5d63d2d619d82263"} Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.548869 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.573665 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ljvgq"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.630362 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxvbz"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.690408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rbbch"] Nov 21 13:57:42 crc kubenswrapper[4675]: I1121 13:57:42.833246 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.179926 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.291736 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-nb\") pod \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.291815 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-swift-storage-0\") pod \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.291882 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb7q2\" (UniqueName: \"kubernetes.io/projected/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-kube-api-access-bb7q2\") pod \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.291959 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-svc\") pod \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.292059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-sb\") pod \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.292213 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-config\") pod \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\" (UID: \"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f\") " Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.310279 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-kube-api-access-bb7q2" (OuterVolumeSpecName: "kube-api-access-bb7q2") pod "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" (UID: "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f"). InnerVolumeSpecName "kube-api-access-bb7q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.324090 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" (UID: "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.333323 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" (UID: "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.347540 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" (UID: "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.360674 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" (UID: "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.369376 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-config" (OuterVolumeSpecName: "config") pod "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" (UID: "4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.395116 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.395149 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.395162 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.395171 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.395183 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.395191 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb7q2\" (UniqueName: \"kubernetes.io/projected/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f-kube-api-access-bb7q2\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.434967 4675 generic.go:334] "Generic (PLEG): container finished" podID="0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" containerID="e380c7d54cf608a8ed287f249bf4657d2a6f4f657a5202e21b54506f842b6af2" exitCode=0 Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.435087 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" event={"ID":"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7","Type":"ContainerDied","Data":"e380c7d54cf608a8ed287f249bf4657d2a6f4f657a5202e21b54506f842b6af2"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.436788 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerID="95ab690219c8ccfa8fbf984c09390258d8325ac816acaf75fad2e6fd73dc6168" exitCode=0 Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.436862 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" event={"ID":"94e07f71-8bfd-45f4-b54f-775cbce3b611","Type":"ContainerDied","Data":"95ab690219c8ccfa8fbf984c09390258d8325ac816acaf75fad2e6fd73dc6168"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.436903 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" event={"ID":"94e07f71-8bfd-45f4-b54f-775cbce3b611","Type":"ContainerStarted","Data":"af5cdc71154b1e41be156d360049b69f5f10e2fbb8cacc805f61d76facccdb32"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.437833 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerStarted","Data":"16ec8eaf3bb4f0db4554d179856d629e9a3dabab2a0c31bf59491c01a81efee2"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.448548 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.448558 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zzn4m" event={"ID":"4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f","Type":"ContainerDied","Data":"421510a0a3996111dcd0a3652f8c639fd634e63f8af7dd5a2b198bcec1aa3bcf"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.448622 4675 scope.go:117] "RemoveContainer" containerID="bb63806ecf49a54d6a4f4fde262b053f392449054f78c21c5d63d2d619d82263" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.451741 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2jx6q" event={"ID":"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4","Type":"ContainerStarted","Data":"2e4a3d5ac8e49d5a151ff61fc91301ab58ef7aa6799a5793a2491c3d0dabce64"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.454164 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljvgq" event={"ID":"9ef50e12-86e6-4c25-b99e-4fc6506d3890","Type":"ContainerStarted","Data":"5ba3bb58e75f108d6c776f77fb36eba97d86942b477e78940626b487e698b73b"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.466446 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w28m5" event={"ID":"d8406cb5-f871-4355-811c-7090afd8aa2e","Type":"ContainerStarted","Data":"dd008beade3f9ea953bed91543ce6c517cec17e50feef22cf9b542aa39619a14"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.472474 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g725b" event={"ID":"b87d2cb3-6cf6-4f8e-ad16-021304428c63","Type":"ContainerStarted","Data":"d82940738edf47a2e15a7bae73b233f0cb99331e3c630cef30ec3fdadae8a7d1"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.478978 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-skvhr" event={"ID":"1c814a96-9fce-4e41-a875-749acc27ecd6","Type":"ContainerStarted","Data":"1bbb88bd9afbd086aca21c08e19da8b038a0c676dcc1d99e006575a2fb40e964"} Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.521369 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2jx6q" podStartSLOduration=3.521348129 podStartE2EDuration="3.521348129s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:43.515058543 +0000 UTC m=+1540.241473270" watchObservedRunningTime="2025-11-21 13:57:43.521348129 +0000 UTC m=+1540.247762856" Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.589736 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zzn4m"] Nov 21 13:57:43 crc kubenswrapper[4675]: I1121 13:57:43.643652 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zzn4m"] Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.249177 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.359919 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-config\") pod \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.360051 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-sb\") pod \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.360113 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-swift-storage-0\") pod \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.360166 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd6hp\" (UniqueName: \"kubernetes.io/projected/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-kube-api-access-gd6hp\") pod \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.360253 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-nb\") pod \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.360347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-svc\") pod \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\" (UID: \"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7\") " Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.372014 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-kube-api-access-gd6hp" (OuterVolumeSpecName: "kube-api-access-gd6hp") pod "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" (UID: "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7"). InnerVolumeSpecName "kube-api-access-gd6hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.408247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" (UID: "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.423960 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" (UID: "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.443688 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" (UID: "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.443748 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" (UID: "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.463759 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.463796 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.463810 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.463831 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.463844 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd6hp\" (UniqueName: \"kubernetes.io/projected/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-kube-api-access-gd6hp\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.467167 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-config" (OuterVolumeSpecName: "config") pod "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" (UID: "0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.525919 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" event={"ID":"0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7","Type":"ContainerDied","Data":"66ca080df643568533d99b76a47a0915bd04b46fd0a91002fcb35fa2828553e9"} Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.525980 4675 scope.go:117] "RemoveContainer" containerID="e380c7d54cf608a8ed287f249bf4657d2a6f4f657a5202e21b54506f842b6af2" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.526170 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-8wkr9" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.551959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" event={"ID":"94e07f71-8bfd-45f4-b54f-775cbce3b611","Type":"ContainerStarted","Data":"e1b35fa9a394b0b13af8de47ece6a09ec0bae5ab6aa95b2f724c589931e67ee9"} Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.552077 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.566126 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.584456 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e21ce4f-da1d-4f89-8f41-6bb22c247d04" containerID="7ac16f0c69bc28cf9b0096e41a231efb4ec8780f387b78ba3cfd646fe897286d" exitCode=0 Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.584873 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxvbz" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="registry-server" containerID="cri-o://a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c" gracePeriod=2 Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.585249 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8d6qh" event={"ID":"8e21ce4f-da1d-4f89-8f41-6bb22c247d04","Type":"ContainerDied","Data":"7ac16f0c69bc28cf9b0096e41a231efb4ec8780f387b78ba3cfd646fe897286d"} Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.615655 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" podStartSLOduration=3.615631558 podStartE2EDuration="3.615631558s" podCreationTimestamp="2025-11-21 13:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:44.592583506 +0000 UTC m=+1541.318998233" watchObservedRunningTime="2025-11-21 13:57:44.615631558 +0000 UTC m=+1541.342046295" Nov 21 13:57:44 crc kubenswrapper[4675]: I1121 13:57:44.884633 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" path="/var/lib/kubelet/pods/4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f/volumes" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.069374 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-8wkr9"] Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.081808 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-8wkr9"] Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.252459 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.433898 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.496781 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-utilities\") pod \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.497350 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwmjs\" (UniqueName: \"kubernetes.io/projected/1d02a60a-11f5-447f-a27f-b4c6a7457c26-kube-api-access-jwmjs\") pod \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.497702 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-utilities" (OuterVolumeSpecName: "utilities") pod "1d02a60a-11f5-447f-a27f-b4c6a7457c26" (UID: "1d02a60a-11f5-447f-a27f-b4c6a7457c26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.498211 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.517915 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d02a60a-11f5-447f-a27f-b4c6a7457c26-kube-api-access-jwmjs" (OuterVolumeSpecName: "kube-api-access-jwmjs") pod "1d02a60a-11f5-447f-a27f-b4c6a7457c26" (UID: "1d02a60a-11f5-447f-a27f-b4c6a7457c26"). InnerVolumeSpecName "kube-api-access-jwmjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.599263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-catalog-content\") pod \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\" (UID: \"1d02a60a-11f5-447f-a27f-b4c6a7457c26\") " Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.599969 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwmjs\" (UniqueName: \"kubernetes.io/projected/1d02a60a-11f5-447f-a27f-b4c6a7457c26-kube-api-access-jwmjs\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.620352 4675 generic.go:334] "Generic (PLEG): container finished" podID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerID="a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c" exitCode=0 Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.620626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxvbz" event={"ID":"1d02a60a-11f5-447f-a27f-b4c6a7457c26","Type":"ContainerDied","Data":"a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c"} Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.620860 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxvbz" event={"ID":"1d02a60a-11f5-447f-a27f-b4c6a7457c26","Type":"ContainerDied","Data":"72c5e1a64f094933420405b17a56fa8838e6b3cc1aae84bb7082e61e19f3afd3"} Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.620811 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxvbz" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.623103 4675 scope.go:117] "RemoveContainer" containerID="a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.634774 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d02a60a-11f5-447f-a27f-b4c6a7457c26" (UID: "1d02a60a-11f5-447f-a27f-b4c6a7457c26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.657221 4675 scope.go:117] "RemoveContainer" containerID="00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.702479 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d02a60a-11f5-447f-a27f-b4c6a7457c26-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.728359 4675 scope.go:117] "RemoveContainer" containerID="b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.758184 4675 scope.go:117] "RemoveContainer" containerID="a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c" Nov 21 13:57:45 crc kubenswrapper[4675]: E1121 13:57:45.758782 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c\": container with ID starting with a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c not found: ID does not exist" containerID="a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.758860 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c"} err="failed to get container status \"a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c\": rpc error: code = NotFound desc = could not find container \"a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c\": container with ID starting with a626cde9fcd740b77fd7ae6d7b843db397624a2a668b8dfcdc5b6648f12b584c not found: ID does not exist" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.758892 4675 scope.go:117] "RemoveContainer" containerID="00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4" Nov 21 13:57:45 crc kubenswrapper[4675]: E1121 13:57:45.759555 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4\": container with ID starting with 00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4 not found: ID does not exist" containerID="00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.759602 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4"} err="failed to get container status \"00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4\": rpc error: code = NotFound desc = could not find container \"00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4\": container with ID starting with 00a08ac972fcea68d3dac4670705202599d150b98103bb8f36142b59a3d17cf4 not found: ID does not exist" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.759632 4675 scope.go:117] "RemoveContainer" containerID="b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c" Nov 21 13:57:45 crc kubenswrapper[4675]: E1121 13:57:45.759962 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c\": container with ID starting with b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c not found: ID does not exist" containerID="b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c" Nov 21 13:57:45 crc kubenswrapper[4675]: I1121 13:57:45.759985 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c"} err="failed to get container status \"b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c\": rpc error: code = NotFound desc = could not find container \"b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c\": container with ID starting with b6f46e802b85c34405e9fca18d1eabaa11acfd3289e050ec71b892933722183c not found: ID does not exist" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.037627 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxvbz"] Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.066143 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxvbz"] Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.412594 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8d6qh" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.537016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-db-sync-config-data\") pod \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.537442 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-config-data\") pod \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.538144 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-combined-ca-bundle\") pod \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.538208 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfg6\" (UniqueName: \"kubernetes.io/projected/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-kube-api-access-hqfg6\") pod \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\" (UID: \"8e21ce4f-da1d-4f89-8f41-6bb22c247d04\") " Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.547132 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8e21ce4f-da1d-4f89-8f41-6bb22c247d04" (UID: "8e21ce4f-da1d-4f89-8f41-6bb22c247d04"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.558395 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-kube-api-access-hqfg6" (OuterVolumeSpecName: "kube-api-access-hqfg6") pod "8e21ce4f-da1d-4f89-8f41-6bb22c247d04" (UID: "8e21ce4f-da1d-4f89-8f41-6bb22c247d04"). InnerVolumeSpecName "kube-api-access-hqfg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.584288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e21ce4f-da1d-4f89-8f41-6bb22c247d04" (UID: "8e21ce4f-da1d-4f89-8f41-6bb22c247d04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.635967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-config-data" (OuterVolumeSpecName: "config-data") pod "8e21ce4f-da1d-4f89-8f41-6bb22c247d04" (UID: "8e21ce4f-da1d-4f89-8f41-6bb22c247d04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.640815 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.640862 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.640874 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.640886 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfg6\" (UniqueName: \"kubernetes.io/projected/8e21ce4f-da1d-4f89-8f41-6bb22c247d04-kube-api-access-hqfg6\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.651415 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8d6qh" event={"ID":"8e21ce4f-da1d-4f89-8f41-6bb22c247d04","Type":"ContainerDied","Data":"8f6be9af7126203161c448eff4ebf6a3d51f9988e3e8ccaab58c4eb0f4f42fba"} Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.651477 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6be9af7126203161c448eff4ebf6a3d51f9988e3e8ccaab58c4eb0f4f42fba" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.651557 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8d6qh" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.893492 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" path="/var/lib/kubelet/pods/0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7/volumes" Nov 21 13:57:46 crc kubenswrapper[4675]: I1121 13:57:46.894659 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" path="/var/lib/kubelet/pods/1d02a60a-11f5-447f-a27f-b4c6a7457c26/volumes" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.086801 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rbbch"] Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.087077 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerName="dnsmasq-dns" containerID="cri-o://e1b35fa9a394b0b13af8de47ece6a09ec0bae5ab6aa95b2f724c589931e67ee9" gracePeriod=10 Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118179 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-jpn8w"] Nov 21 13:57:47 crc kubenswrapper[4675]: E1121 13:57:47.118630 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="registry-server" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118653 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="registry-server" Nov 21 13:57:47 crc kubenswrapper[4675]: E1121 13:57:47.118670 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="extract-content" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118676 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="extract-content" Nov 21 13:57:47 crc kubenswrapper[4675]: E1121 13:57:47.118698 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" containerName="init" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118705 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" containerName="init" Nov 21 13:57:47 crc kubenswrapper[4675]: E1121 13:57:47.118714 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e21ce4f-da1d-4f89-8f41-6bb22c247d04" containerName="glance-db-sync" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118720 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e21ce4f-da1d-4f89-8f41-6bb22c247d04" containerName="glance-db-sync" Nov 21 13:57:47 crc kubenswrapper[4675]: E1121 13:57:47.118749 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" containerName="init" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118757 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" containerName="init" Nov 21 13:57:47 crc kubenswrapper[4675]: E1121 13:57:47.118772 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="extract-utilities" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118778 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" 
containerName="extract-utilities" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118966 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e21ce4f-da1d-4f89-8f41-6bb22c247d04" containerName="glance-db-sync" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118982 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef3ef1c-be4d-4e91-ba6d-3be6bdf315f7" containerName="init" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.118997 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d02a60a-11f5-447f-a27f-b4c6a7457c26" containerName="registry-server" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.119010 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f830fd1-55c9-4bee-8f52-9ea73bb3cb4f" containerName="init" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.122260 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.165237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-jpn8w"] Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.260144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgj62\" (UniqueName: \"kubernetes.io/projected/ef666cb3-3002-47d0-9aec-c53581c6a688-kube-api-access-vgj62\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.260200 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.260221 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.260300 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.260384 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-config\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.260419 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: 
\"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.363741 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.363791 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.363889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.363979 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-config\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.364014 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.364134 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgj62\" (UniqueName: \"kubernetes.io/projected/ef666cb3-3002-47d0-9aec-c53581c6a688-kube-api-access-vgj62\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.365431 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-config\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.365522 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.366394 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" 
Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.366867 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.368059 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.408012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgj62\" (UniqueName: \"kubernetes.io/projected/ef666cb3-3002-47d0-9aec-c53581c6a688-kube-api-access-vgj62\") pod \"dnsmasq-dns-56df8fb6b7-jpn8w\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.454629 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.720826 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerID="e1b35fa9a394b0b13af8de47ece6a09ec0bae5ab6aa95b2f724c589931e67ee9" exitCode=0 Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.720905 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" event={"ID":"94e07f71-8bfd-45f4-b54f-775cbce3b611","Type":"ContainerDied","Data":"e1b35fa9a394b0b13af8de47ece6a09ec0bae5ab6aa95b2f724c589931e67ee9"} Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.720935 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" event={"ID":"94e07f71-8bfd-45f4-b54f-775cbce3b611","Type":"ContainerDied","Data":"af5cdc71154b1e41be156d360049b69f5f10e2fbb8cacc805f61d76facccdb32"} Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.720974 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5cdc71154b1e41be156d360049b69f5f10e2fbb8cacc805f61d76facccdb32" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.761450 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.876601 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-svc\") pod \"94e07f71-8bfd-45f4-b54f-775cbce3b611\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.876653 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-sb\") pod \"94e07f71-8bfd-45f4-b54f-775cbce3b611\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.876685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvh8r\" (UniqueName: \"kubernetes.io/projected/94e07f71-8bfd-45f4-b54f-775cbce3b611-kube-api-access-dvh8r\") pod \"94e07f71-8bfd-45f4-b54f-775cbce3b611\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.876726 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-swift-storage-0\") pod \"94e07f71-8bfd-45f4-b54f-775cbce3b611\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.876892 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-nb\") pod \"94e07f71-8bfd-45f4-b54f-775cbce3b611\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.876992 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-config\") pod \"94e07f71-8bfd-45f4-b54f-775cbce3b611\" (UID: \"94e07f71-8bfd-45f4-b54f-775cbce3b611\") " Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.891205 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e07f71-8bfd-45f4-b54f-775cbce3b611-kube-api-access-dvh8r" (OuterVolumeSpecName: "kube-api-access-dvh8r") pod "94e07f71-8bfd-45f4-b54f-775cbce3b611" (UID: "94e07f71-8bfd-45f4-b54f-775cbce3b611"). InnerVolumeSpecName "kube-api-access-dvh8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:47 crc kubenswrapper[4675]: I1121 13:57:47.979489 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvh8r\" (UniqueName: \"kubernetes.io/projected/94e07f71-8bfd-45f4-b54f-775cbce3b611-kube-api-access-dvh8r\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:47.993282 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:57:48 crc kubenswrapper[4675]: E1121 13:57:47.993909 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerName="init" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:47.993928 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerName="init" Nov 21 13:57:48 crc kubenswrapper[4675]: E1121 13:57:47.993992 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerName="dnsmasq-dns" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:47.994000 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerName="dnsmasq-dns" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:47.994279 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" containerName="dnsmasq-dns" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:47.995774 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.000559 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.002682 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.016016 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bb8kq" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.085941 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.086125 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-logs\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.086167 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.086279 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.086303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.086442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzknm\" (UniqueName: \"kubernetes.io/projected/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-kube-api-access-xzknm\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.086473 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.149573 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94e07f71-8bfd-45f4-b54f-775cbce3b611" (UID: "94e07f71-8bfd-45f4-b54f-775cbce3b611"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.154608 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94e07f71-8bfd-45f4-b54f-775cbce3b611" (UID: "94e07f71-8bfd-45f4-b54f-775cbce3b611"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.188449 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.188764 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-logs\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.188815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.189006 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.189033 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.189346 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzknm\" (UniqueName: \"kubernetes.io/projected/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-kube-api-access-xzknm\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.189397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.189484 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.189499 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.190981 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.191284 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-logs\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.199031 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.199987 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-config" (OuterVolumeSpecName: "config") pod "94e07f71-8bfd-45f4-b54f-775cbce3b611" (UID: "94e07f71-8bfd-45f4-b54f-775cbce3b611"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.230177 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.232654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94e07f71-8bfd-45f4-b54f-775cbce3b611" (UID: "94e07f71-8bfd-45f4-b54f-775cbce3b611"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.252240 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.252785 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.253677 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94e07f71-8bfd-45f4-b54f-775cbce3b611" (UID: "94e07f71-8bfd-45f4-b54f-775cbce3b611"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.266866 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzknm\" (UniqueName: \"kubernetes.io/projected/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-kube-api-access-xzknm\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.275978 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.298618 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.298656 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.298670 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e07f71-8bfd-45f4-b54f-775cbce3b611-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.347705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.349982 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.357332 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.422230 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.480870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.504326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.504401 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.505985 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58fw\" (UniqueName: \"kubernetes.io/projected/ae1eb473-8c76-4999-91a1-0242a242ec8a-kube-api-access-r58fw\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.506019 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.506184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.506253 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.506307 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " 
pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.520250 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-jpn8w"] Nov 21 13:57:48 crc kubenswrapper[4675]: W1121 13:57:48.528353 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef666cb3_3002_47d0_9aec_c53581c6a688.slice/crio-4949a70345aba9b50166738f4d9edbfe596eedcc0a31da56293878c7b59acbfe WatchSource:0}: Error finding container 4949a70345aba9b50166738f4d9edbfe596eedcc0a31da56293878c7b59acbfe: Status 404 returned error can't find the container with id 4949a70345aba9b50166738f4d9edbfe596eedcc0a31da56293878c7b59acbfe Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.547099 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.572044 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616408 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616624 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r58fw\" (UniqueName: \"kubernetes.io/projected/ae1eb473-8c76-4999-91a1-0242a242ec8a-kube-api-access-r58fw\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616665 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616817 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.616861 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.619169 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.623749 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.632605 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.634596 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.636496 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.642582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r58fw\" (UniqueName: \"kubernetes.io/projected/ae1eb473-8c76-4999-91a1-0242a242ec8a-kube-api-access-r58fw\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.654333 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.718130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.725121 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpck5"] Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.739614 4675 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.745682 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rbbch" Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.746736 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" event={"ID":"ef666cb3-3002-47d0-9aec-c53581c6a688","Type":"ContainerStarted","Data":"4949a70345aba9b50166738f4d9edbfe596eedcc0a31da56293878c7b59acbfe"} Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.747173 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qpck5" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="registry-server" containerID="cri-o://08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f" gracePeriod=2 Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.803601 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rbbch"] Nov 21 13:57:48 crc kubenswrapper[4675]: I1121 13:57:48.883374 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rbbch"] Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.351504 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:57:49 crc kubenswrapper[4675]: W1121 13:57:49.415798 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc65ece34_5cb7_4f87_84cb_b6a5fa1449b2.slice/crio-2f91fce3378bd61d02e2fe1200adbdceb3ee80095d290d0d2062e1b3ea06b0dc WatchSource:0}: Error finding container 2f91fce3378bd61d02e2fe1200adbdceb3ee80095d290d0d2062e1b3ea06b0dc: Status 404 returned error can't find the container with id 2f91fce3378bd61d02e2fe1200adbdceb3ee80095d290d0d2062e1b3ea06b0dc Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.637168 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.663597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-utilities\") pod \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.664299 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-catalog-content\") pod \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.664379 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvwbh\" (UniqueName: \"kubernetes.io/projected/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-kube-api-access-hvwbh\") pod \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\" (UID: \"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a\") " Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.664999 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-utilities" (OuterVolumeSpecName: "utilities") pod "01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" (UID: "01b5c9cb-9a2f-471c-894d-349dd2f1ad8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.665297 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.677263 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-kube-api-access-hvwbh" (OuterVolumeSpecName: "kube-api-access-hvwbh") pod "01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" (UID: "01b5c9cb-9a2f-471c-894d-349dd2f1ad8a"). InnerVolumeSpecName "kube-api-access-hvwbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.722291 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" (UID: "01b5c9cb-9a2f-471c-894d-349dd2f1ad8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.767879 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.767919 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvwbh\" (UniqueName: \"kubernetes.io/projected/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a-kube-api-access-hvwbh\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.790952 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2","Type":"ContainerStarted","Data":"2f91fce3378bd61d02e2fe1200adbdceb3ee80095d290d0d2062e1b3ea06b0dc"} Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.830231 4675 generic.go:334] "Generic (PLEG): container finished" podID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerID="e98496fc2fabb24b559772bb16ee79109e85c6fb8a07f7b3cf9008141fd62f98" exitCode=0 Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.830317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" event={"ID":"ef666cb3-3002-47d0-9aec-c53581c6a688","Type":"ContainerDied","Data":"e98496fc2fabb24b559772bb16ee79109e85c6fb8a07f7b3cf9008141fd62f98"} Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.837489 4675 generic.go:334] "Generic (PLEG): container finished" podID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerID="08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f" exitCode=0 Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.837578 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qpck5" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.837573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpck5" event={"ID":"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a","Type":"ContainerDied","Data":"08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f"} Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.837627 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpck5" event={"ID":"01b5c9cb-9a2f-471c-894d-349dd2f1ad8a","Type":"ContainerDied","Data":"dac4eb45e0f37a0810ca08c55442bf524166b62d466b809c6693f46e90dff6a1"} Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.837649 4675 scope.go:117] "RemoveContainer" containerID="08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f" Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.841144 4675 generic.go:334] "Generic (PLEG): container finished" podID="1c814a96-9fce-4e41-a875-749acc27ecd6" containerID="1bbb88bd9afbd086aca21c08e19da8b038a0c676dcc1d99e006575a2fb40e964" exitCode=0 Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.841179 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-skvhr" event={"ID":"1c814a96-9fce-4e41-a875-749acc27ecd6","Type":"ContainerDied","Data":"1bbb88bd9afbd086aca21c08e19da8b038a0c676dcc1d99e006575a2fb40e964"} Nov 21 13:57:49 crc kubenswrapper[4675]: I1121 13:57:49.946192 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.023779 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpck5"] Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.041341 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qpck5"] Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.112409 4675 scope.go:117] "RemoveContainer" containerID="71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.144611 4675 scope.go:117] "RemoveContainer" containerID="31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.374102 4675 scope.go:117] "RemoveContainer" containerID="08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f" Nov 21 13:57:50 crc kubenswrapper[4675]: E1121 13:57:50.381310 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f\": container with ID starting with 08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f not found: ID does not exist" containerID="08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.381370 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f"} err="failed to get container status \"08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f\": rpc error: code = NotFound desc = could not find container \"08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f\": container with ID starting with 08fb1d4b15c1e7375f06808ccb0990abd2567d630091bb0fd41d8a07ccea552f not found: ID 
does not exist" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.381403 4675 scope.go:117] "RemoveContainer" containerID="71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6" Nov 21 13:57:50 crc kubenswrapper[4675]: E1121 13:57:50.382335 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6\": container with ID starting with 71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6 not found: ID does not exist" containerID="71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.382376 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6"} err="failed to get container status \"71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6\": rpc error: code = NotFound desc = could not find container \"71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6\": container with ID starting with 71769ae5d6b84e6e7e82d990e1af5cd36eaa51a9ef6b719499613aec26f856b6 not found: ID does not exist" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.382402 4675 scope.go:117] "RemoveContainer" containerID="31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5" Nov 21 13:57:50 crc kubenswrapper[4675]: E1121 13:57:50.383802 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5\": container with ID starting with 31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5 not found: ID does not exist" containerID="31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.383844 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5"} err="failed to get container status \"31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5\": rpc error: code = NotFound desc = could not find container \"31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5\": container with ID starting with 31b4590c60d7c8bcc2eb70ed3bf537128cc4c00975d487d3c03051bb3f8d4df5 not found: ID does not exist" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.886390 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" path="/var/lib/kubelet/pods/01b5c9cb-9a2f-471c-894d-349dd2f1ad8a/volumes" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.894668 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e07f71-8bfd-45f4-b54f-775cbce3b611" path="/var/lib/kubelet/pods/94e07f71-8bfd-45f4-b54f-775cbce3b611/volumes" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.895475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.895510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2","Type":"ContainerStarted","Data":"3bd01f78da76f2e55e362f88f592ae07fef4aa76659496b980edb357e425d1a4"} Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.895530 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" event={"ID":"ef666cb3-3002-47d0-9aec-c53581c6a688","Type":"ContainerStarted","Data":"5e284026728b5388c4a81324a90c2445da35d4d6643f6044efd26eeb49c2b5e8"} Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.899042 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae1eb473-8c76-4999-91a1-0242a242ec8a","Type":"ContainerStarted","Data":"b7c964e4f18c494ece37a5620f5168df2f0c4b8c05e431e3c3c9bd2b80af6e1e"} Nov 21 13:57:50 crc kubenswrapper[4675]: I1121 13:57:50.919434 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" podStartSLOduration=3.919415218 podStartE2EDuration="3.919415218s" podCreationTimestamp="2025-11-21 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:50.908366934 +0000 UTC m=+1547.634781661" watchObservedRunningTime="2025-11-21 13:57:50.919415218 +0000 UTC m=+1547.645829945" Nov 21 13:57:51 crc kubenswrapper[4675]: I1121 13:57:51.914648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae1eb473-8c76-4999-91a1-0242a242ec8a","Type":"ContainerStarted","Data":"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9"} Nov 21 13:57:52 crc kubenswrapper[4675]: I1121 13:57:52.037259 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:57:52 crc kubenswrapper[4675]: I1121 13:57:52.115514 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.002812 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6kxg7"] Nov 21 13:57:53 crc kubenswrapper[4675]: E1121 13:57:53.004197 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="registry-server" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.004227 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="registry-server" Nov 21 13:57:53 crc kubenswrapper[4675]: E1121 13:57:53.004266 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="extract-content" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.004271 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="extract-content" Nov 21 13:57:53 crc kubenswrapper[4675]: E1121 13:57:53.004289 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="extract-utilities" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.004295 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="extract-utilities" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.004569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b5c9cb-9a2f-471c-894d-349dd2f1ad8a" containerName="registry-server" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.006020 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.017764 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6kxg7"] Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.121109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d488b\" (UniqueName: \"kubernetes.io/projected/78391e3d-3bab-469a-a163-3729fdf23773-kube-api-access-d488b\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.121189 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-catalog-content\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.121242 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-utilities\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.222582 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-catalog-content\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.222654 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-utilities\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.222825 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d488b\" (UniqueName: \"kubernetes.io/projected/78391e3d-3bab-469a-a163-3729fdf23773-kube-api-access-d488b\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.223043 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-catalog-content\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.223279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-utilities\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.240644 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d488b\" (UniqueName: \"kubernetes.io/projected/78391e3d-3bab-469a-a163-3729fdf23773-kube-api-access-d488b\") pod \"community-operators-6kxg7\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:53 crc kubenswrapper[4675]: I1121 13:57:53.345404 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.255924 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.427342 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-fernet-keys\") pod \"1c814a96-9fce-4e41-a875-749acc27ecd6\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.427497 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvv2\" (UniqueName: \"kubernetes.io/projected/1c814a96-9fce-4e41-a875-749acc27ecd6-kube-api-access-hgvv2\") pod \"1c814a96-9fce-4e41-a875-749acc27ecd6\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.427540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-combined-ca-bundle\") pod \"1c814a96-9fce-4e41-a875-749acc27ecd6\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.427567 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-scripts\") pod \"1c814a96-9fce-4e41-a875-749acc27ecd6\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.427597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-credential-keys\") pod \"1c814a96-9fce-4e41-a875-749acc27ecd6\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.427655 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-config-data\") pod \"1c814a96-9fce-4e41-a875-749acc27ecd6\" (UID: \"1c814a96-9fce-4e41-a875-749acc27ecd6\") " Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.434265 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-scripts" (OuterVolumeSpecName: "scripts") pod "1c814a96-9fce-4e41-a875-749acc27ecd6" (UID: "1c814a96-9fce-4e41-a875-749acc27ecd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.435103 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1c814a96-9fce-4e41-a875-749acc27ecd6" (UID: "1c814a96-9fce-4e41-a875-749acc27ecd6"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.436558 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1c814a96-9fce-4e41-a875-749acc27ecd6" (UID: "1c814a96-9fce-4e41-a875-749acc27ecd6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.438595 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c814a96-9fce-4e41-a875-749acc27ecd6-kube-api-access-hgvv2" (OuterVolumeSpecName: "kube-api-access-hgvv2") pod "1c814a96-9fce-4e41-a875-749acc27ecd6" (UID: "1c814a96-9fce-4e41-a875-749acc27ecd6"). InnerVolumeSpecName "kube-api-access-hgvv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.465896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c814a96-9fce-4e41-a875-749acc27ecd6" (UID: "1c814a96-9fce-4e41-a875-749acc27ecd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.477947 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-config-data" (OuterVolumeSpecName: "config-data") pod "1c814a96-9fce-4e41-a875-749acc27ecd6" (UID: "1c814a96-9fce-4e41-a875-749acc27ecd6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.529869 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.529902 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgvv2\" (UniqueName: \"kubernetes.io/projected/1c814a96-9fce-4e41-a875-749acc27ecd6-kube-api-access-hgvv2\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.529913 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.529921 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.529929 4675 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.529939 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c814a96-9fce-4e41-a875-749acc27ecd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.969647 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2","Type":"ContainerStarted","Data":"1576cc716724c79093bbbcddf36cacbb5d5f45a270646ba3f795d1f2f98898d7"} Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.970236 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-log" containerID="cri-o://3bd01f78da76f2e55e362f88f592ae07fef4aa76659496b980edb357e425d1a4" gracePeriod=30 Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.970262 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-httpd" containerID="cri-o://1576cc716724c79093bbbcddf36cacbb5d5f45a270646ba3f795d1f2f98898d7" gracePeriod=30 Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.977410 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-skvhr" event={"ID":"1c814a96-9fce-4e41-a875-749acc27ecd6","Type":"ContainerDied","Data":"69971148b579b659d7b2852742c610bbeebac9d8cfcadaf8996358ef7f195fb2"} Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.977448 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69971148b579b659d7b2852742c610bbeebac9d8cfcadaf8996358ef7f195fb2" Nov 21 13:57:56 crc kubenswrapper[4675]: I1121 13:57:56.977476 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-skvhr" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:56.999938 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.999919350999999 podStartE2EDuration="10.999919351s" podCreationTimestamp="2025-11-21 13:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:56.999218563 +0000 UTC m=+1553.725633320" watchObservedRunningTime="2025-11-21 13:57:56.999919351 +0000 UTC m=+1553.726334078" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.442810 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-skvhr"] Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.451897 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-skvhr"] Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.456937 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.526597 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-wdfqd"] Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.530363 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" containerID="cri-o://44046f6071f4cbf34e069b862d557701102073dbd31b1ebabc2bf6d669ea1bf1" gracePeriod=10 Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.595027 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wgt2v"] Nov 21 13:57:57 crc kubenswrapper[4675]: E1121 13:57:57.595846 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c814a96-9fce-4e41-a875-749acc27ecd6" containerName="keystone-bootstrap" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.595867 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c814a96-9fce-4e41-a875-749acc27ecd6" containerName="keystone-bootstrap" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.596109 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c814a96-9fce-4e41-a875-749acc27ecd6" containerName="keystone-bootstrap" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.596916 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.616846 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.616879 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tzj4r" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.617027 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.617166 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.617296 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.654919 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wgt2v"] Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.674603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-scripts\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.674677 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-fernet-keys\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.674855 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-credential-keys\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.674942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-config-data\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.674995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-combined-ca-bundle\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.675095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpdj\" (UniqueName: \"kubernetes.io/projected/fc50afd7-32f7-4d99-9952-81186547313e-kube-api-access-6jpdj\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.780986 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-credential-keys\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.781088 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-config-data\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.781126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-combined-ca-bundle\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.781168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jpdj\" (UniqueName: \"kubernetes.io/projected/fc50afd7-32f7-4d99-9952-81186547313e-kube-api-access-6jpdj\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.785244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-scripts\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.785308 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-fernet-keys\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.801037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-fernet-keys\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.808853 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-config-data\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.809906 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-credential-keys\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.810146 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-scripts\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " 
pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.818998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jpdj\" (UniqueName: \"kubernetes.io/projected/fc50afd7-32f7-4d99-9952-81186547313e-kube-api-access-6jpdj\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.831342 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-combined-ca-bundle\") pod \"keystone-bootstrap-wgt2v\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.983668 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.997769 4675 generic.go:334] "Generic (PLEG): container finished" podID="7449ee56-df73-4460-bb87-337a1aab25d6" containerID="44046f6071f4cbf34e069b862d557701102073dbd31b1ebabc2bf6d669ea1bf1" exitCode=0 Nov 21 13:57:57 crc kubenswrapper[4675]: I1121 13:57:57.997870 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" event={"ID":"7449ee56-df73-4460-bb87-337a1aab25d6","Type":"ContainerDied","Data":"44046f6071f4cbf34e069b862d557701102073dbd31b1ebabc2bf6d669ea1bf1"} Nov 21 13:57:58 crc kubenswrapper[4675]: I1121 13:57:58.000687 4675 generic.go:334] "Generic (PLEG): container finished" podID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerID="1576cc716724c79093bbbcddf36cacbb5d5f45a270646ba3f795d1f2f98898d7" exitCode=0 Nov 21 13:57:58 crc kubenswrapper[4675]: I1121 13:57:58.000736 4675 generic.go:334] "Generic (PLEG): container finished" podID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerID="3bd01f78da76f2e55e362f88f592ae07fef4aa76659496b980edb357e425d1a4" exitCode=143 Nov 21 13:57:58 crc kubenswrapper[4675]: I1121 13:57:58.000763 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2","Type":"ContainerDied","Data":"1576cc716724c79093bbbcddf36cacbb5d5f45a270646ba3f795d1f2f98898d7"} Nov 21 13:57:58 crc kubenswrapper[4675]: I1121 13:57:58.000798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2","Type":"ContainerDied","Data":"3bd01f78da76f2e55e362f88f592ae07fef4aa76659496b980edb357e425d1a4"} Nov 21 13:57:58 crc kubenswrapper[4675]: I1121 13:57:58.864856 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c814a96-9fce-4e41-a875-749acc27ecd6" path="/var/lib/kubelet/pods/1c814a96-9fce-4e41-a875-749acc27ecd6/volumes" Nov 21 13:58:01 crc kubenswrapper[4675]: I1121 13:58:01.829909 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Nov 21 13:58:06 crc kubenswrapper[4675]: I1121 13:58:06.829788 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.149:5353: connect: connection refused" Nov 21 13:58:11 crc kubenswrapper[4675]: E1121 13:58:11.431176 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 21 13:58:11 crc kubenswrapper[4675]: E1121 13:58:11.431712 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmcs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-n65cw_openstack(baacdfb7-787a-462a-8102-472a47283224): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:58:11 crc kubenswrapper[4675]: E1121 13:58:11.432870 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-n65cw" podUID="baacdfb7-787a-462a-8102-472a47283224" Nov 21 13:58:11 crc kubenswrapper[4675]: E1121 13:58:11.770330 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 21 13:58:11 crc kubenswrapper[4675]: E1121 13:58:11.770508 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n88h647h6bh76h57bh95h565h74h5f5h697h6h5fchd9h8h5f4hch6dh65ch676h65dh5c8h5dbh5fbhdbhb6h64h66h668h559hb6h5f8h79q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(87a33291-326b-4010-a851-ec2d41c8a754): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:58:11 crc kubenswrapper[4675]: I1121 13:58:11.830656 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Nov 21 13:58:11 crc kubenswrapper[4675]: I1121 13:58:11.830773 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:58:12 crc kubenswrapper[4675]: E1121 13:58:12.185260 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-n65cw" podUID="baacdfb7-787a-462a-8102-472a47283224" Nov 21 13:58:16 crc kubenswrapper[4675]: I1121 13:58:16.136386 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:58:16 crc kubenswrapper[4675]: I1121 13:58:16.137289 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:58:16 crc kubenswrapper[4675]: I1121 13:58:16.830053 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Nov 21 13:58:18 crc kubenswrapper[4675]: E1121 13:58:18.345425 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 21 13:58:18 crc kubenswrapper[4675]: E1121 13:58:18.346200 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtdfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod cinder-db-sync-w28m5_openstack(d8406cb5-f871-4355-811c-7090afd8aa2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:58:18 crc kubenswrapper[4675]: E1121 13:58:18.347459 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-w28m5" podUID="d8406cb5-f871-4355-811c-7090afd8aa2e" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.430129 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607141 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-httpd-run\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzknm\" (UniqueName: \"kubernetes.io/projected/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-kube-api-access-xzknm\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607278 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-scripts\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607319 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-combined-ca-bundle\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607342 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-logs\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607473 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607518 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-config-data\") pod \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\" (UID: \"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2\") " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.607667 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.608164 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-logs" (OuterVolumeSpecName: "logs") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.608753 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.608779 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.613223 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.614729 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-kube-api-access-xzknm" (OuterVolumeSpecName: "kube-api-access-xzknm") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "kube-api-access-xzknm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.628005 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-scripts" (OuterVolumeSpecName: "scripts") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.637982 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.666836 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-config-data" (OuterVolumeSpecName: "config-data") pod "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" (UID: "c65ece34-5cb7-4f87-84cb-b6a5fa1449b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.733398 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzknm\" (UniqueName: \"kubernetes.io/projected/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-kube-api-access-xzknm\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.733442 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.733455 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.733630 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.733656 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.757244 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.834594 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:18 crc kubenswrapper[4675]: E1121 13:58:18.892014 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 21 13:58:18 crc kubenswrapper[4675]: E1121 13:58:18.892165 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrm2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-ljvgq_openstack(9ef50e12-86e6-4c25-b99e-4fc6506d3890): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:58:18 crc kubenswrapper[4675]: E1121 13:58:18.893623 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-ljvgq" podUID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" Nov 21 13:58:18 crc kubenswrapper[4675]: I1121 13:58:18.912035 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.039619 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfflj\" (UniqueName: \"kubernetes.io/projected/7449ee56-df73-4460-bb87-337a1aab25d6-kube-api-access-xfflj\") pod \"7449ee56-df73-4460-bb87-337a1aab25d6\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.039789 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-nb\") pod \"7449ee56-df73-4460-bb87-337a1aab25d6\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.039848 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-dns-svc\") pod \"7449ee56-df73-4460-bb87-337a1aab25d6\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.040013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-sb\") pod \"7449ee56-df73-4460-bb87-337a1aab25d6\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.040056 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-config\") pod \"7449ee56-df73-4460-bb87-337a1aab25d6\" (UID: \"7449ee56-df73-4460-bb87-337a1aab25d6\") " Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.049942 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7449ee56-df73-4460-bb87-337a1aab25d6-kube-api-access-xfflj" (OuterVolumeSpecName: "kube-api-access-xfflj") pod "7449ee56-df73-4460-bb87-337a1aab25d6" (UID: "7449ee56-df73-4460-bb87-337a1aab25d6"). InnerVolumeSpecName "kube-api-access-xfflj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.101796 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7449ee56-df73-4460-bb87-337a1aab25d6" (UID: "7449ee56-df73-4460-bb87-337a1aab25d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.104495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-config" (OuterVolumeSpecName: "config") pod "7449ee56-df73-4460-bb87-337a1aab25d6" (UID: "7449ee56-df73-4460-bb87-337a1aab25d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.124516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7449ee56-df73-4460-bb87-337a1aab25d6" (UID: "7449ee56-df73-4460-bb87-337a1aab25d6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.128830 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7449ee56-df73-4460-bb87-337a1aab25d6" (UID: "7449ee56-df73-4460-bb87-337a1aab25d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.146001 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfflj\" (UniqueName: \"kubernetes.io/projected/7449ee56-df73-4460-bb87-337a1aab25d6-kube-api-access-xfflj\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.146041 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.146054 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.146077 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.146085 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7449ee56-df73-4460-bb87-337a1aab25d6-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.253625 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" event={"ID":"7449ee56-df73-4460-bb87-337a1aab25d6","Type":"ContainerDied","Data":"f0c65d21369ddc68768f6958025bc0df2d9439aa93d6d9be50926ece7ad0f091"} Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.253698 4675 scope.go:117] "RemoveContainer" containerID="44046f6071f4cbf34e069b862d557701102073dbd31b1ebabc2bf6d669ea1bf1" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.253639 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-wdfqd" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.257375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c65ece34-5cb7-4f87-84cb-b6a5fa1449b2","Type":"ContainerDied","Data":"2f91fce3378bd61d02e2fe1200adbdceb3ee80095d290d0d2062e1b3ea06b0dc"} Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.257585 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: E1121 13:58:19.270274 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-ljvgq" podUID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" Nov 21 13:58:19 crc kubenswrapper[4675]: E1121 13:58:19.270521 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-w28m5" podUID="d8406cb5-f871-4355-811c-7090afd8aa2e" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.335425 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-wdfqd"] Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.336720 4675 scope.go:117] "RemoveContainer" containerID="64b3497b7f197b373fa178772c998a38d6cbc6047e0ce5f2862916bdfa42378d" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.349746 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-wdfqd"] Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.390806 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:58:19 crc kubenswrapper[4675]: W1121 13:58:19.397369 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-5674ed90fcb575f98d0e0c5c11484815867539c582fdb5f7d4c6a45728d31a35 WatchSource:0}: Error finding container 5674ed90fcb575f98d0e0c5c11484815867539c582fdb5f7d4c6a45728d31a35: Status 404 returned error can't find the container with id 5674ed90fcb575f98d0e0c5c11484815867539c582fdb5f7d4c6a45728d31a35 Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.405797 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.444658 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6kxg7"] Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.481325 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:58:19 crc kubenswrapper[4675]: E1121 13:58:19.481814 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-log" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.481831 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-log" Nov 21 13:58:19 crc kubenswrapper[4675]: E1121 13:58:19.481852 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="init" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.481860 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="init" Nov 21 13:58:19 crc kubenswrapper[4675]: E1121 13:58:19.481873 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" Nov 21 13:58:19 crc 
kubenswrapper[4675]: I1121 13:58:19.481880 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" Nov 21 13:58:19 crc kubenswrapper[4675]: E1121 13:58:19.481921 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-httpd" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.481927 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-httpd" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.482188 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-httpd" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.482208 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" containerName="glance-log" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.482234 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" containerName="dnsmasq-dns" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.483783 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.489708 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.491010 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.497371 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.586391 4675 scope.go:117] "RemoveContainer" containerID="1576cc716724c79093bbbcddf36cacbb5d5f45a270646ba3f795d1f2f98898d7" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.662676 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.662763 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-config-data\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.662795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.662833 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z669l\" (UniqueName: \"kubernetes.io/projected/597a2793-4d69-4558-906f-ee005618985e-kube-api-access-z669l\") pod 
\"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.662897 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.664555 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-scripts\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.664724 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.664969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-logs\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.707773 4675 scope.go:117] "RemoveContainer" containerID="3bd01f78da76f2e55e362f88f592ae07fef4aa76659496b980edb357e425d1a4" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787531 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-logs\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787592 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787654 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-config-data\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787675 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787706 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-z669l\" (UniqueName: \"kubernetes.io/projected/597a2793-4d69-4558-906f-ee005618985e-kube-api-access-z669l\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787738 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787821 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-scripts\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.787881 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.788130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-logs\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.788401 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.788405 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.792685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.794595 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-scripts\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.796977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.808324 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z669l\" (UniqueName: \"kubernetes.io/projected/597a2793-4d69-4558-906f-ee005618985e-kube-api-access-z669l\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.812761 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.844676 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wgt2v"] Nov 21 13:58:19 crc kubenswrapper[4675]: W1121 13:58:19.853228 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc50afd7_32f7_4d99_9952_81186547313e.slice/crio-7ac9672dbc43814032825256b23696b337a2d2d45e94aaddc4734363af80887a WatchSource:0}: Error finding container 7ac9672dbc43814032825256b23696b337a2d2d45e94aaddc4734363af80887a: Status 404 returned error can't find the container with id 7ac9672dbc43814032825256b23696b337a2d2d45e94aaddc4734363af80887a Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.859080 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 13:58:19 crc kubenswrapper[4675]: I1121 13:58:19.859982 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") " pod="openstack/glance-default-external-api-0" Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.122280 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.281232 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerStarted","Data":"429065759d82e3c2cb6911f24413045efeca9cd4957254ad1098e3f6511b963c"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.284126 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g725b" event={"ID":"b87d2cb3-6cf6-4f8e-ad16-021304428c63","Type":"ContainerStarted","Data":"ed5746704b11be7c1bd3f0297c0df88a6c6f02bce1594d2897c14072fa3763fe"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.286198 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgt2v" event={"ID":"fc50afd7-32f7-4d99-9952-81186547313e","Type":"ContainerStarted","Data":"2fda938354f9b7d247dba067271f91c3707a67f1ee2c440294f91b767b9b1acf"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.286239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgt2v" event={"ID":"fc50afd7-32f7-4d99-9952-81186547313e","Type":"ContainerStarted","Data":"7ac9672dbc43814032825256b23696b337a2d2d45e94aaddc4734363af80887a"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.294520 4675 generic.go:334] "Generic (PLEG): container finished" podID="78391e3d-3bab-469a-a163-3729fdf23773" containerID="cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2" exitCode=0 Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.294615 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerDied","Data":"cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.294642 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerStarted","Data":"5674ed90fcb575f98d0e0c5c11484815867539c582fdb5f7d4c6a45728d31a35"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.304674 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae1eb473-8c76-4999-91a1-0242a242ec8a","Type":"ContainerStarted","Data":"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479"} Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.305241 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-log" containerID="cri-o://de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9" gracePeriod=30 Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.305503 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-httpd" containerID="cri-o://6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479" gracePeriod=30 Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.309031 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-g725b" podStartSLOduration=3.780470407 podStartE2EDuration="40.309008313s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" 
firstStartedPulling="2025-11-21 13:57:42.35861396 +0000 UTC m=+1539.085028687" lastFinishedPulling="2025-11-21 13:58:18.887151866 +0000 UTC m=+1575.613566593" observedRunningTime="2025-11-21 13:58:20.303743101 +0000 UTC m=+1577.030157848" watchObservedRunningTime="2025-11-21 13:58:20.309008313 +0000 UTC m=+1577.035423050" Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.348398 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wgt2v" podStartSLOduration=23.348372186 podStartE2EDuration="23.348372186s" podCreationTimestamp="2025-11-21 13:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:20.320191242 +0000 UTC m=+1577.046605969" watchObservedRunningTime="2025-11-21 13:58:20.348372186 +0000 UTC m=+1577.074786923" Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.405897 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.405877853 podStartE2EDuration="33.405877853s" podCreationTimestamp="2025-11-21 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:20.4013357 +0000 UTC m=+1577.127750427" watchObservedRunningTime="2025-11-21 13:58:20.405877853 +0000 UTC m=+1577.132292580" Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.726901 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.869515 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7449ee56-df73-4460-bb87-337a1aab25d6" path="/var/lib/kubelet/pods/7449ee56-df73-4460-bb87-337a1aab25d6/volumes" Nov 21 13:58:20 crc kubenswrapper[4675]: I1121 13:58:20.870953 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c65ece34-5cb7-4f87-84cb-b6a5fa1449b2" path="/var/lib/kubelet/pods/c65ece34-5cb7-4f87-84cb-b6a5fa1449b2/volumes" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.002482 4675 util.go:48] "No ready sandbox for pod can be found. 
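
These startup-latency records are internally consistent: for placement-db-sync-g725b the end-to-end duration since creation (40.309008313s) minus the image-pull window (m=+1539.085028687 to m=+1575.613566593, i.e. 36.528537906s) is exactly the reported podStartSLOduration of 3.780470407s, while pods that pulled nothing (keystone-bootstrap-wgt2v, glance-default-internal-api-0) carry zero-valued pull timestamps and an SLO duration equal to the E2E duration. Checking that arithmetic:

    # Monotonic offsets (the m=+... values) from the placement-db-sync-g725b record.
    first_started_pulling = 1539.085028687
    last_finished_pulling = 1575.613566593
    pod_start_e2e         = 40.309008313   # podStartE2EDuration, in seconds

    pulling = last_finished_pulling - first_started_pulling
    slo     = pod_start_e2e - pulling      # startup time excluding image pulls

    print(f"image pulling took   {pulling:.9f}s")  # 36.528537906s
    print(f"podStartSLOduration  {slo:.9f}s")      # 3.780470407s, as logged
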
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119305 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-httpd-run\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-scripts\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119493 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-logs\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119551 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-combined-ca-bundle\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119666 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119745 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r58fw\" (UniqueName: \"kubernetes.io/projected/ae1eb473-8c76-4999-91a1-0242a242ec8a-kube-api-access-r58fw\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.119868 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-config-data\") pod \"ae1eb473-8c76-4999-91a1-0242a242ec8a\" (UID: \"ae1eb473-8c76-4999-91a1-0242a242ec8a\") " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.121944 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.122227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-logs" (OuterVolumeSpecName: "logs") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.128621 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-scripts" (OuterVolumeSpecName: "scripts") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.130449 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.135216 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1eb473-8c76-4999-91a1-0242a242ec8a-kube-api-access-r58fw" (OuterVolumeSpecName: "kube-api-access-r58fw") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "kube-api-access-r58fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.166375 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.197563 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-config-data" (OuterVolumeSpecName: "config-data") pod "ae1eb473-8c76-4999-91a1-0242a242ec8a" (UID: "ae1eb473-8c76-4999-91a1-0242a242ec8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.222960 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.222995 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.223003 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.223012 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae1eb473-8c76-4999-91a1-0242a242ec8a-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.223022 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1eb473-8c76-4999-91a1-0242a242ec8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.223058 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.223083 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r58fw\" (UniqueName: \"kubernetes.io/projected/ae1eb473-8c76-4999-91a1-0242a242ec8a-kube-api-access-r58fw\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.270826 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.324980 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.339199 4675 generic.go:334] "Generic (PLEG): container finished" podID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerID="6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479" exitCode=143 Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.339236 4675 generic.go:334] "Generic (PLEG): container finished" podID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerID="de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9" exitCode=143 Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.339284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae1eb473-8c76-4999-91a1-0242a242ec8a","Type":"ContainerDied","Data":"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479"} Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.339312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae1eb473-8c76-4999-91a1-0242a242ec8a","Type":"ContainerDied","Data":"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9"} Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 
13:58:21.339321 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae1eb473-8c76-4999-91a1-0242a242ec8a","Type":"ContainerDied","Data":"b7c964e4f18c494ece37a5620f5168df2f0c4b8c05e431e3c3c9bd2b80af6e1e"} Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.339335 4675 scope.go:117] "RemoveContainer" containerID="6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.339434 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.352726 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerStarted","Data":"e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444"} Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.356631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"597a2793-4d69-4558-906f-ee005618985e","Type":"ContainerStarted","Data":"b519fb097b6e27977981bac5fb31b4d890802f6efdc9e48deb9f6689aa7aa5c8"} Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.492371 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.513170 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.534136 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:58:21 crc kubenswrapper[4675]: E1121 13:58:21.534821 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-log" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.534839 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-log" Nov 21 13:58:21 crc kubenswrapper[4675]: E1121 13:58:21.534860 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-httpd" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.534868 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-httpd" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.535120 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-httpd" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.535141 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" containerName="glance-log" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.536601 4675 util.go:30] "No sandbox for pod can be found. 
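
Both glance-default-internal-api-0 containers were stopped above with gracePeriod=30 and finished with exitCode=143, which follows the runtime convention of reporting signal deaths as 128 plus the signal number: 143 = 128 + 15 (SIGTERM), so both containers honored SIGTERM inside the grace window; a 137 (128 + 9, SIGKILL) would have meant the grace period expired first. A small decoder for that convention:

    import signal

    def describe_exit(code):
        # Container runtimes report a signal death as 128 + signal number.
        if code > 128:
            try:
                return f"killed by {signal.Signals(code - 128).name}"
            except ValueError:
                return f"killed by signal {code - 128}"
        return f"exited normally with status {code}"

    print(describe_exit(143))  # killed by SIGTERM: stopped within the grace period
    print(describe_exit(137))  # killed by SIGKILL: grace period expired
    print(describe_exit(0))    # exited normally with status 0
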
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.543789 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.544018 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.566663 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkvbj\" (UniqueName: \"kubernetes.io/projected/3f36a643-d8fe-4f5e-b618-e1e25914628c-kube-api-access-zkvbj\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631692 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631740 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631800 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631829 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631880 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.631952 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.661551 4675 scope.go:117] "RemoveContainer" containerID="de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.701782 4675 scope.go:117] "RemoveContainer" containerID="6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479" Nov 21 13:58:21 crc kubenswrapper[4675]: E1121 13:58:21.702485 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479\": container with ID starting with 6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479 not found: ID does not exist" containerID="6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.702519 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479"} err="failed to get container status \"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479\": rpc error: code = NotFound desc = could not find container \"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479\": container with ID starting with 6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479 not found: ID does not exist" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.702541 4675 scope.go:117] "RemoveContainer" containerID="de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9" Nov 21 13:58:21 crc kubenswrapper[4675]: E1121 13:58:21.702802 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9\": container with ID starting with de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9 not found: ID does not exist" containerID="de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.702832 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9"} err="failed to get container status \"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9\": rpc error: code = NotFound desc = could not find container \"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9\": container with ID starting with de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9 not found: ID does not exist" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.702853 4675 scope.go:117] "RemoveContainer" containerID="6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.703622 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479"} err="failed to get container status \"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479\": rpc error: code = NotFound desc = could not find container 
\"6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479\": container with ID starting with 6d25b1875b5a542d538b66057b8efdb04f08b3cb50e4591fdbe84ba87d172479 not found: ID does not exist" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.703649 4675 scope.go:117] "RemoveContainer" containerID="de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.704691 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9"} err="failed to get container status \"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9\": rpc error: code = NotFound desc = could not find container \"de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9\": container with ID starting with de8962d6f28eabaeabc89596f8ba64ec8690433adc6fa28878ae9f2fa3120df9 not found: ID does not exist" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733408 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733482 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733534 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733563 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733603 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733694 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733761 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.733843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkvbj\" (UniqueName: \"kubernetes.io/projected/3f36a643-d8fe-4f5e-b618-e1e25914628c-kube-api-access-zkvbj\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.734666 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.735385 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.735448 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.740198 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.749725 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.750901 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.751190 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.752726 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkvbj\" (UniqueName: \"kubernetes.io/projected/3f36a643-d8fe-4f5e-b618-e1e25914628c-kube-api-access-zkvbj\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 
21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.776331 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:58:21 crc kubenswrapper[4675]: I1121 13:58:21.864391 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:22 crc kubenswrapper[4675]: I1121 13:58:22.377656 4675 generic.go:334] "Generic (PLEG): container finished" podID="78391e3d-3bab-469a-a163-3729fdf23773" containerID="e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444" exitCode=0 Nov 21 13:58:22 crc kubenswrapper[4675]: I1121 13:58:22.377765 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerDied","Data":"e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444"} Nov 21 13:58:22 crc kubenswrapper[4675]: I1121 13:58:22.381456 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"597a2793-4d69-4558-906f-ee005618985e","Type":"ContainerStarted","Data":"8472e0b81aaeca11e557f8ee1319039a7c7651ab01f69f50f81e85bfac90e43f"} Nov 21 13:58:22 crc kubenswrapper[4675]: I1121 13:58:22.641773 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:58:22 crc kubenswrapper[4675]: W1121 13:58:22.651163 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f36a643_d8fe_4f5e_b618_e1e25914628c.slice/crio-05fa1b9642108acf57540f94436dd14391ce8c9e372c765ee156c081c3032e5e WatchSource:0}: Error finding container 05fa1b9642108acf57540f94436dd14391ce8c9e372c765ee156c081c3032e5e: Status 404 returned error can't find the container with id 05fa1b9642108acf57540f94436dd14391ce8c9e372c765ee156c081c3032e5e Nov 21 13:58:22 crc kubenswrapper[4675]: I1121 13:58:22.870308 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1eb473-8c76-4999-91a1-0242a242ec8a" path="/var/lib/kubelet/pods/ae1eb473-8c76-4999-91a1-0242a242ec8a/volumes" Nov 21 13:58:23 crc kubenswrapper[4675]: I1121 13:58:23.409798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"597a2793-4d69-4558-906f-ee005618985e","Type":"ContainerStarted","Data":"b08a2793e7f8a34083efd3edfe08bcd77a57c0b062fe7d0c64bb2c77cbe6829c"} Nov 21 13:58:23 crc kubenswrapper[4675]: I1121 13:58:23.411667 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f36a643-d8fe-4f5e-b618-e1e25914628c","Type":"ContainerStarted","Data":"a9f76b2154c6ee134725d91b1f6d072e62ab28816574c37c271fe38071471768"} Nov 21 13:58:23 crc kubenswrapper[4675]: I1121 13:58:23.411695 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f36a643-d8fe-4f5e-b618-e1e25914628c","Type":"ContainerStarted","Data":"05fa1b9642108acf57540f94436dd14391ce8c9e372c765ee156c081c3032e5e"} Nov 21 13:58:23 crc kubenswrapper[4675]: I1121 13:58:23.444786 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.444740714 
podStartE2EDuration="4.444740714s" podCreationTimestamp="2025-11-21 13:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:23.443975794 +0000 UTC m=+1580.170390541" watchObservedRunningTime="2025-11-21 13:58:23.444740714 +0000 UTC m=+1580.171155441" Nov 21 13:58:24 crc kubenswrapper[4675]: I1121 13:58:24.436126 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" containerID="2e4a3d5ac8e49d5a151ff61fc91301ab58ef7aa6799a5793a2491c3d0dabce64" exitCode=0 Nov 21 13:58:24 crc kubenswrapper[4675]: I1121 13:58:24.436284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2jx6q" event={"ID":"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4","Type":"ContainerDied","Data":"2e4a3d5ac8e49d5a151ff61fc91301ab58ef7aa6799a5793a2491c3d0dabce64"} Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.473136 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2jx6q" event={"ID":"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4","Type":"ContainerDied","Data":"404db1807f765a07981bb2285fab5fe1c2bf33b384ac4e031196b4d9d286357e"} Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.473669 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="404db1807f765a07981bb2285fab5fe1c2bf33b384ac4e031196b4d9d286357e" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.477301 4675 generic.go:334] "Generic (PLEG): container finished" podID="b87d2cb3-6cf6-4f8e-ad16-021304428c63" containerID="ed5746704b11be7c1bd3f0297c0df88a6c6f02bce1594d2897c14072fa3763fe" exitCode=0 Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.477349 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g725b" event={"ID":"b87d2cb3-6cf6-4f8e-ad16-021304428c63","Type":"ContainerDied","Data":"ed5746704b11be7c1bd3f0297c0df88a6c6f02bce1594d2897c14072fa3763fe"} Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.694681 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.826778 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-combined-ca-bundle\") pod \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.826866 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rnvt\" (UniqueName: \"kubernetes.io/projected/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-kube-api-access-5rnvt\") pod \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.826956 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-config\") pod \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\" (UID: \"f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4\") " Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.830744 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-kube-api-access-5rnvt" (OuterVolumeSpecName: "kube-api-access-5rnvt") pod "f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" (UID: "f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4"). InnerVolumeSpecName "kube-api-access-5rnvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.854146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-config" (OuterVolumeSpecName: "config") pod "f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" (UID: "f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.857032 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" (UID: "f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.930115 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rnvt\" (UniqueName: \"kubernetes.io/projected/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-kube-api-access-5rnvt\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.930412 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:27 crc kubenswrapper[4675]: I1121 13:58:27.930423 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.493372 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerStarted","Data":"f0595c9aa871172606b2199a4303c0bfa0ff490f6d45dcaf0c162699ace1059b"} Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.496015 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerStarted","Data":"09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7"} Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.501661 4675 generic.go:334] "Generic (PLEG): container finished" podID="fc50afd7-32f7-4d99-9952-81186547313e" containerID="2fda938354f9b7d247dba067271f91c3707a67f1ee2c440294f91b767b9b1acf" exitCode=0 Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.501736 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgt2v" event={"ID":"fc50afd7-32f7-4d99-9952-81186547313e","Type":"ContainerDied","Data":"2fda938354f9b7d247dba067271f91c3707a67f1ee2c440294f91b767b9b1acf"} Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.504246 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n65cw" event={"ID":"baacdfb7-787a-462a-8102-472a47283224","Type":"ContainerStarted","Data":"2f21b9737b3dc310046ef45cb62c0bfa007fee44b01d7166c51d06bba472e35b"} Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.506631 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2jx6q" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.506680 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f36a643-d8fe-4f5e-b618-e1e25914628c","Type":"ContainerStarted","Data":"60cad6006b01a90cf51d23a40f106d8666a0d3eaddd388aff83fb51c8c7fd079"} Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.522794 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6kxg7" podStartSLOduration=29.320197189 podStartE2EDuration="36.522777996s" podCreationTimestamp="2025-11-21 13:57:52 +0000 UTC" firstStartedPulling="2025-11-21 13:58:20.297373482 +0000 UTC m=+1577.023788209" lastFinishedPulling="2025-11-21 13:58:27.499954289 +0000 UTC m=+1584.226369016" observedRunningTime="2025-11-21 13:58:28.52171667 +0000 UTC m=+1585.248131397" watchObservedRunningTime="2025-11-21 13:58:28.522777996 +0000 UTC m=+1585.249192713" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.557297 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.557274238 podStartE2EDuration="7.557274238s" podCreationTimestamp="2025-11-21 13:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:28.545159166 +0000 UTC m=+1585.271573893" watchObservedRunningTime="2025-11-21 13:58:28.557274238 +0000 UTC m=+1585.283688975" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.568707 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-n65cw" podStartSLOduration=3.079674387 podStartE2EDuration="48.568687633s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" firstStartedPulling="2025-11-21 13:57:42.084976993 +0000 UTC m=+1538.811391720" lastFinishedPulling="2025-11-21 13:58:27.573990239 +0000 UTC m=+1584.300404966" observedRunningTime="2025-11-21 13:58:28.561186506 +0000 UTC m=+1585.287601253" watchObservedRunningTime="2025-11-21 13:58:28.568687633 +0000 UTC m=+1585.295102360" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.863521 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-g725b" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.955897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-scripts\") pod \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.955996 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87d2cb3-6cf6-4f8e-ad16-021304428c63-logs\") pod \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.956044 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-config-data\") pod \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.956120 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kqp6\" (UniqueName: \"kubernetes.io/projected/b87d2cb3-6cf6-4f8e-ad16-021304428c63-kube-api-access-7kqp6\") pod \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.956157 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-combined-ca-bundle\") pod \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\" (UID: \"b87d2cb3-6cf6-4f8e-ad16-021304428c63\") " Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.960088 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-xh6kv"] Nov 21 13:58:28 crc kubenswrapper[4675]: E1121 13:58:28.960614 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" containerName="neutron-db-sync" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.960842 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" containerName="neutron-db-sync" Nov 21 13:58:28 crc kubenswrapper[4675]: E1121 13:58:28.960895 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87d2cb3-6cf6-4f8e-ad16-021304428c63" containerName="placement-db-sync" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.960904 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87d2cb3-6cf6-4f8e-ad16-021304428c63" containerName="placement-db-sync" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.961188 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87d2cb3-6cf6-4f8e-ad16-021304428c63" containerName="placement-db-sync" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.961242 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" containerName="neutron-db-sync" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.961867 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87d2cb3-6cf6-4f8e-ad16-021304428c63-logs" (OuterVolumeSpecName: "logs") pod "b87d2cb3-6cf6-4f8e-ad16-021304428c63" (UID: "b87d2cb3-6cf6-4f8e-ad16-021304428c63"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.969609 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.972656 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87d2cb3-6cf6-4f8e-ad16-021304428c63-kube-api-access-7kqp6" (OuterVolumeSpecName: "kube-api-access-7kqp6") pod "b87d2cb3-6cf6-4f8e-ad16-021304428c63" (UID: "b87d2cb3-6cf6-4f8e-ad16-021304428c63"). InnerVolumeSpecName "kube-api-access-7kqp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:28 crc kubenswrapper[4675]: I1121 13:58:28.979247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-scripts" (OuterVolumeSpecName: "scripts") pod "b87d2cb3-6cf6-4f8e-ad16-021304428c63" (UID: "b87d2cb3-6cf6-4f8e-ad16-021304428c63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.031118 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-xh6kv"] Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.034941 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b87d2cb3-6cf6-4f8e-ad16-021304428c63" (UID: "b87d2cb3-6cf6-4f8e-ad16-021304428c63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.047787 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-config-data" (OuterVolumeSpecName: "config-data") pod "b87d2cb3-6cf6-4f8e-ad16-021304428c63" (UID: "b87d2cb3-6cf6-4f8e-ad16-021304428c63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061450 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061538 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bqm\" (UniqueName: \"kubernetes.io/projected/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-kube-api-access-x7bqm\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061588 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-svc\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061709 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-config\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061840 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061849 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b87d2cb3-6cf6-4f8e-ad16-021304428c63-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061857 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.061865 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kqp6\" (UniqueName: \"kubernetes.io/projected/b87d2cb3-6cf6-4f8e-ad16-021304428c63-kube-api-access-7kqp6\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:29 crc 
kubenswrapper[4675]: I1121 13:58:29.061875 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87d2cb3-6cf6-4f8e-ad16-021304428c63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.084036 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58fb49d57b-hpfw5"] Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.086352 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.090655 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.091293 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.091509 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.091699 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2bmhb" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.108960 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58fb49d57b-hpfw5"] Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.164702 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.164796 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-httpd-config\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.164815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-ovndb-tls-certs\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.164973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bqm\" (UniqueName: \"kubernetes.io/projected/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-kube-api-access-x7bqm\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165099 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165216 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-j54zc\" (UniqueName: \"kubernetes.io/projected/5e9e808c-bda6-4e6d-bd55-d94d426574c3-kube-api-access-j54zc\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-svc\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165297 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-config\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165379 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-combined-ca-bundle\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.165526 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-config\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.167141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.170825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.170852 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.171720 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-config\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.185875 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-svc\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.188583 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bqm\" (UniqueName: \"kubernetes.io/projected/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-kube-api-access-x7bqm\") pod \"dnsmasq-dns-6b7b667979-xh6kv\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.267560 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-config\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.267663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-httpd-config\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.267693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-ovndb-tls-certs\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.267851 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j54zc\" (UniqueName: \"kubernetes.io/projected/5e9e808c-bda6-4e6d-bd55-d94d426574c3-kube-api-access-j54zc\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.267946 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-combined-ca-bundle\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.273677 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-config\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.277752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-httpd-config\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " 
pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.287209 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-ovndb-tls-certs\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.287992 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-combined-ca-bundle\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.290043 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j54zc\" (UniqueName: \"kubernetes.io/projected/5e9e808c-bda6-4e6d-bd55-d94d426574c3-kube-api-access-j54zc\") pod \"neutron-58fb49d57b-hpfw5\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.415499 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.425105 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.540443 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-g725b" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.543933 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-g725b" event={"ID":"b87d2cb3-6cf6-4f8e-ad16-021304428c63","Type":"ContainerDied","Data":"d82940738edf47a2e15a7bae73b233f0cb99331e3c630cef30ec3fdadae8a7d1"} Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.543981 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82940738edf47a2e15a7bae73b233f0cb99331e3c630cef30ec3fdadae8a7d1" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.621890 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64487ff74-sfh5j"] Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.626999 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.629319 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.631318 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7sbnn" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.631581 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.631736 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.631892 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.654223 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64487ff74-sfh5j"] Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.778413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-internal-tls-certs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.778777 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-config-data\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.778975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff36b45-ea40-44fd-84fe-dc732a5af439-logs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.779054 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7dql\" (UniqueName: \"kubernetes.io/projected/2ff36b45-ea40-44fd-84fe-dc732a5af439-kube-api-access-t7dql\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.779221 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-combined-ca-bundle\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.779323 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-public-tls-certs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.779425 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-scripts\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882073 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-combined-ca-bundle\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882401 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-public-tls-certs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882456 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-scripts\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-internal-tls-certs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-config-data\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff36b45-ea40-44fd-84fe-dc732a5af439-logs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.882730 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7dql\" (UniqueName: \"kubernetes.io/projected/2ff36b45-ea40-44fd-84fe-dc732a5af439-kube-api-access-t7dql\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.890832 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ff36b45-ea40-44fd-84fe-dc732a5af439-logs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.900909 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-config-data\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.903811 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-scripts\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.904085 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-internal-tls-certs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.908426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7dql\" (UniqueName: \"kubernetes.io/projected/2ff36b45-ea40-44fd-84fe-dc732a5af439-kube-api-access-t7dql\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.913358 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-public-tls-certs\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:29 crc kubenswrapper[4675]: I1121 13:58:29.932831 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff36b45-ea40-44fd-84fe-dc732a5af439-combined-ca-bundle\") pod \"placement-64487ff74-sfh5j\" (UID: \"2ff36b45-ea40-44fd-84fe-dc732a5af439\") " pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.044831 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.122600 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.122713 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.249820 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.257774 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.266895 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-xh6kv"] Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.339958 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58fb49d57b-hpfw5"] Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.552963 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 13:58:30 crc kubenswrapper[4675]: I1121 13:58:30.553366 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.373884 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.389757 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dbf4b8f9c-tnln2"] Nov 21 13:58:31 crc kubenswrapper[4675]: E1121 13:58:31.390615 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc50afd7-32f7-4d99-9952-81186547313e" containerName="keystone-bootstrap" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.390635 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc50afd7-32f7-4d99-9952-81186547313e" containerName="keystone-bootstrap" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.401929 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc50afd7-32f7-4d99-9952-81186547313e" containerName="keystone-bootstrap" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.409678 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.412249 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.413080 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.438599 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-fernet-keys\") pod \"fc50afd7-32f7-4d99-9952-81186547313e\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.438721 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-combined-ca-bundle\") pod \"fc50afd7-32f7-4d99-9952-81186547313e\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.438793 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jpdj\" (UniqueName: \"kubernetes.io/projected/fc50afd7-32f7-4d99-9952-81186547313e-kube-api-access-6jpdj\") pod \"fc50afd7-32f7-4d99-9952-81186547313e\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.438865 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-config-data\") pod \"fc50afd7-32f7-4d99-9952-81186547313e\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.438984 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-credential-keys\") pod \"fc50afd7-32f7-4d99-9952-81186547313e\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.439059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-scripts\") pod \"fc50afd7-32f7-4d99-9952-81186547313e\" (UID: \"fc50afd7-32f7-4d99-9952-81186547313e\") " Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.440119 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-public-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.443952 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-ovndb-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.444197 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-httpd-config\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.449623 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-internal-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.449811 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-config\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.450005 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-combined-ca-bundle\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.450084 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7tn6\" (UniqueName: \"kubernetes.io/projected/7d084a12-d301-4ea1-b049-ca6211a8929d-kube-api-access-m7tn6\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.458631 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbf4b8f9c-tnln2"] Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.462323 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fc50afd7-32f7-4d99-9952-81186547313e" (UID: "fc50afd7-32f7-4d99-9952-81186547313e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.463192 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-scripts" (OuterVolumeSpecName: "scripts") pod "fc50afd7-32f7-4d99-9952-81186547313e" (UID: "fc50afd7-32f7-4d99-9952-81186547313e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.463708 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc50afd7-32f7-4d99-9952-81186547313e-kube-api-access-6jpdj" (OuterVolumeSpecName: "kube-api-access-6jpdj") pod "fc50afd7-32f7-4d99-9952-81186547313e" (UID: "fc50afd7-32f7-4d99-9952-81186547313e"). InnerVolumeSpecName "kube-api-access-6jpdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.465277 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fc50afd7-32f7-4d99-9952-81186547313e" (UID: "fc50afd7-32f7-4d99-9952-81186547313e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.502075 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-config-data" (OuterVolumeSpecName: "config-data") pod "fc50afd7-32f7-4d99-9952-81186547313e" (UID: "fc50afd7-32f7-4d99-9952-81186547313e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.515405 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc50afd7-32f7-4d99-9952-81186547313e" (UID: "fc50afd7-32f7-4d99-9952-81186547313e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.537325 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64487ff74-sfh5j"] Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.555983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-internal-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556136 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-config\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556208 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-combined-ca-bundle\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556233 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7tn6\" (UniqueName: \"kubernetes.io/projected/7d084a12-d301-4ea1-b049-ca6211a8929d-kube-api-access-m7tn6\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-public-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-ovndb-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-httpd-config\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556401 4675 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556412 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556420 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556427 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556436 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jpdj\" (UniqueName: \"kubernetes.io/projected/fc50afd7-32f7-4d99-9952-81186547313e-kube-api-access-6jpdj\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.556446 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc50afd7-32f7-4d99-9952-81186547313e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.565623 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-config\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.565722 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-httpd-config\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.567003 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-ovndb-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.570799 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-public-tls-certs\") pod 
\"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.571725 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-combined-ca-bundle\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.574755 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d084a12-d301-4ea1-b049-ca6211a8929d-internal-tls-certs\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.575121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7tn6\" (UniqueName: \"kubernetes.io/projected/7d084a12-d301-4ea1-b049-ca6211a8929d-kube-api-access-m7tn6\") pod \"neutron-dbf4b8f9c-tnln2\" (UID: \"7d084a12-d301-4ea1-b049-ca6211a8929d\") " pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.599282 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" event={"ID":"63c41720-bda4-42fd-a9e0-f07fa4d0ea64","Type":"ContainerStarted","Data":"062484bc412adfbb0a9f4b292e17ff157506f4370cb6bd91dcc881593b7508c6"} Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.601725 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wgt2v" event={"ID":"fc50afd7-32f7-4d99-9952-81186547313e","Type":"ContainerDied","Data":"7ac9672dbc43814032825256b23696b337a2d2d45e94aaddc4734363af80887a"} Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.601752 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ac9672dbc43814032825256b23696b337a2d2d45e94aaddc4734363af80887a" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.601810 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wgt2v" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.607675 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58fb49d57b-hpfw5" event={"ID":"5e9e808c-bda6-4e6d-bd55-d94d426574c3","Type":"ContainerStarted","Data":"5a1a475c9a2dc752935b32864a4fd78b5d4479bc04bba7368b30ec1412297118"} Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.798148 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.870974 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.871019 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.954898 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:31 crc kubenswrapper[4675]: I1121 13:58:31.977647 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.594280 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbf4b8f9c-tnln2"] Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.657390 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7ff9b4b9fd-shbm9"] Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.659408 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.662670 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.662825 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.662850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.663727 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.663887 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.668451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58fb49d57b-hpfw5" event={"ID":"5e9e808c-bda6-4e6d-bd55-d94d426574c3","Type":"ContainerStarted","Data":"6be953ebc702dbb085f2794e1ff331a85dc7e242bcd41aae2ced637e99f4cba6"} Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.668487 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58fb49d57b-hpfw5" event={"ID":"5e9e808c-bda6-4e6d-bd55-d94d426574c3","Type":"ContainerStarted","Data":"0afd3aed69fe00b4a8898f6932f4eac85e152ed13548de4023674c9b51fce34d"} Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.674762 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.678027 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tzj4r" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.695623 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-public-tls-certs\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 
13:58:32.695730 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-scripts\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.695758 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-credential-keys\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.695816 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-config-data\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.695925 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-internal-tls-certs\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.695961 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xqt\" (UniqueName: \"kubernetes.io/projected/f494051d-de96-4044-a28d-3b05672b5a66-kube-api-access-s8xqt\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.696016 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-fernet-keys\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.696105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-combined-ca-bundle\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.701657 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64487ff74-sfh5j" event={"ID":"2ff36b45-ea40-44fd-84fe-dc732a5af439","Type":"ContainerStarted","Data":"735b087163ef3cb5be26603f6010281acf7f09461b34cf1d20daa3c40c971b05"} Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.701713 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64487ff74-sfh5j" event={"ID":"2ff36b45-ea40-44fd-84fe-dc732a5af439","Type":"ContainerStarted","Data":"2dfbe2424bfebd3513da1272cac8c1eba13db696fc52f5fc2c6aa5dbe85b6392"} Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.701729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64487ff74-sfh5j" 
event={"ID":"2ff36b45-ea40-44fd-84fe-dc732a5af439","Type":"ContainerStarted","Data":"556aec65ed5f293bdf7ed7a7ab9840b33c04a41e9ec869d921d3fdda398fa9bf"} Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.702621 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.702681 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.710594 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7ff9b4b9fd-shbm9"] Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.719527 4675 generic.go:334] "Generic (PLEG): container finished" podID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerID="eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53" exitCode=0 Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.719762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" event={"ID":"63c41720-bda4-42fd-a9e0-f07fa4d0ea64","Type":"ContainerDied","Data":"eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53"} Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.720696 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.720745 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.722735 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58fb49d57b-hpfw5" podStartSLOduration=3.722722359 podStartE2EDuration="3.722722359s" podCreationTimestamp="2025-11-21 13:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:32.701469608 +0000 UTC m=+1589.427884345" watchObservedRunningTime="2025-11-21 13:58:32.722722359 +0000 UTC m=+1589.449137096" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.749221 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64487ff74-sfh5j" podStartSLOduration=3.7491996800000003 podStartE2EDuration="3.74919968s" podCreationTimestamp="2025-11-21 13:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:32.734734489 +0000 UTC m=+1589.461149226" watchObservedRunningTime="2025-11-21 13:58:32.74919968 +0000 UTC m=+1589.475614407" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.797909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-public-tls-certs\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.798010 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-scripts\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.798039 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-credential-keys\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.798129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-config-data\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.798241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-internal-tls-certs\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.798281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xqt\" (UniqueName: \"kubernetes.io/projected/f494051d-de96-4044-a28d-3b05672b5a66-kube-api-access-s8xqt\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.801033 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-fernet-keys\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.801273 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-combined-ca-bundle\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.805803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-public-tls-certs\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.806514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-fernet-keys\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.808780 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-config-data\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.809043 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-credential-keys\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.809314 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-combined-ca-bundle\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.811733 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-internal-tls-certs\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.816205 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f494051d-de96-4044-a28d-3b05672b5a66-scripts\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.821325 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xqt\" (UniqueName: \"kubernetes.io/projected/f494051d-de96-4044-a28d-3b05672b5a66-kube-api-access-s8xqt\") pod \"keystone-7ff9b4b9fd-shbm9\" (UID: \"f494051d-de96-4044-a28d-3b05672b5a66\") " pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:32 crc kubenswrapper[4675]: I1121 13:58:32.993530 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:33 crc kubenswrapper[4675]: W1121 13:58:33.120272 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d084a12_d301_4ea1_b049_ca6211a8929d.slice/crio-43b73c07c201d8e1eb98b3a2eb24f5650521b07337c8c9b5d492e76e8d32995f WatchSource:0}: Error finding container 43b73c07c201d8e1eb98b3a2eb24f5650521b07337c8c9b5d492e76e8d32995f: Status 404 returned error can't find the container with id 43b73c07c201d8e1eb98b3a2eb24f5650521b07337c8c9b5d492e76e8d32995f Nov 21 13:58:33 crc kubenswrapper[4675]: I1121 13:58:33.347338 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:58:33 crc kubenswrapper[4675]: I1121 13:58:33.348267 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:58:33 crc kubenswrapper[4675]: I1121 13:58:33.744202 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbf4b8f9c-tnln2" event={"ID":"7d084a12-d301-4ea1-b049-ca6211a8929d","Type":"ContainerStarted","Data":"43b73c07c201d8e1eb98b3a2eb24f5650521b07337c8c9b5d492e76e8d32995f"} Nov 21 13:58:33 crc kubenswrapper[4675]: I1121 13:58:33.747984 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w28m5" event={"ID":"d8406cb5-f871-4355-811c-7090afd8aa2e","Type":"ContainerStarted","Data":"22ede8ee41a76ab5eec6d7f27028d967e19f6633224401e6810cb5f13928bcda"} Nov 21 13:58:33 crc kubenswrapper[4675]: I1121 13:58:33.751576 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7ff9b4b9fd-shbm9"] Nov 21 13:58:33 crc kubenswrapper[4675]: W1121 13:58:33.755006 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf494051d_de96_4044_a28d_3b05672b5a66.slice/crio-898d6dbc8c789428c745358312e4a97baf466d91ae216a1d51b153101bde02be WatchSource:0}: Error finding container 898d6dbc8c789428c745358312e4a97baf466d91ae216a1d51b153101bde02be: Status 404 returned error can't find the container with id 898d6dbc8c789428c745358312e4a97baf466d91ae216a1d51b153101bde02be Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.428634 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6kxg7" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" probeResult="failure" output=< Nov 21 13:58:34 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 13:58:34 crc kubenswrapper[4675]: > Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.761953 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7ff9b4b9fd-shbm9" event={"ID":"f494051d-de96-4044-a28d-3b05672b5a66","Type":"ContainerStarted","Data":"edd3e71c027164f8ab584de8a0e25af9497ce6cbac2d13eae8d8afe59fac7477"} Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.761994 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7ff9b4b9fd-shbm9" event={"ID":"f494051d-de96-4044-a28d-3b05672b5a66","Type":"ContainerStarted","Data":"898d6dbc8c789428c745358312e4a97baf466d91ae216a1d51b153101bde02be"} Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.762120 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:58:34 crc 
kubenswrapper[4675]: I1121 13:58:34.764451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbf4b8f9c-tnln2" event={"ID":"7d084a12-d301-4ea1-b049-ca6211a8929d","Type":"ContainerStarted","Data":"f05be9ddbf6e5246235615cbcaabdbd1d1810e77114a508f1544595fa5cea6b2"} Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.766232 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljvgq" event={"ID":"9ef50e12-86e6-4c25-b99e-4fc6506d3890","Type":"ContainerStarted","Data":"d87f3e031452ac6493d5a8f97e1741eb98eae6e086bc9b8294ffedbb54990093"} Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.768566 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" event={"ID":"63c41720-bda4-42fd-a9e0-f07fa4d0ea64","Type":"ContainerStarted","Data":"cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3"} Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.768622 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.769174 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.787839 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-w28m5" podStartSLOduration=4.877471347 podStartE2EDuration="54.787817428s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" firstStartedPulling="2025-11-21 13:57:42.365767058 +0000 UTC m=+1539.092181785" lastFinishedPulling="2025-11-21 13:58:32.276113139 +0000 UTC m=+1589.002527866" observedRunningTime="2025-11-21 13:58:33.766124779 +0000 UTC m=+1590.492539506" watchObservedRunningTime="2025-11-21 13:58:34.787817428 +0000 UTC m=+1591.514232155" Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.799184 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7ff9b4b9fd-shbm9" podStartSLOduration=2.799160571 podStartE2EDuration="2.799160571s" podCreationTimestamp="2025-11-21 13:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:34.795853579 +0000 UTC m=+1591.522268316" watchObservedRunningTime="2025-11-21 13:58:34.799160571 +0000 UTC m=+1591.525575298" Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.816007 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ljvgq" podStartSLOduration=4.21293813 podStartE2EDuration="54.815984551s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" firstStartedPulling="2025-11-21 13:57:42.572714998 +0000 UTC m=+1539.299129725" lastFinishedPulling="2025-11-21 13:58:33.175761419 +0000 UTC m=+1589.902176146" observedRunningTime="2025-11-21 13:58:34.813450888 +0000 UTC m=+1591.539865615" watchObservedRunningTime="2025-11-21 13:58:34.815984551 +0000 UTC m=+1591.542399278" Nov 21 13:58:34 crc kubenswrapper[4675]: I1121 13:58:34.838895 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" podStartSLOduration=6.838871053 podStartE2EDuration="6.838871053s" podCreationTimestamp="2025-11-21 13:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:34.836488694 +0000 UTC m=+1591.562903431" 
watchObservedRunningTime="2025-11-21 13:58:34.838871053 +0000 UTC m=+1591.565285780" Nov 21 13:58:36 crc kubenswrapper[4675]: I1121 13:58:36.801921 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbf4b8f9c-tnln2" event={"ID":"7d084a12-d301-4ea1-b049-ca6211a8929d","Type":"ContainerStarted","Data":"343771c5c635275104a8abe8519327b8c9e1dee8dc719c2d9fc0dcb9bbcfbfa3"} Nov 21 13:58:37 crc kubenswrapper[4675]: I1121 13:58:37.812283 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:58:37 crc kubenswrapper[4675]: I1121 13:58:37.846566 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dbf4b8f9c-tnln2" podStartSLOduration=6.846546555 podStartE2EDuration="6.846546555s" podCreationTimestamp="2025-11-21 13:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:37.846007281 +0000 UTC m=+1594.572422028" watchObservedRunningTime="2025-11-21 13:58:37.846546555 +0000 UTC m=+1594.572961282" Nov 21 13:58:39 crc kubenswrapper[4675]: I1121 13:58:39.417304 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:58:39 crc kubenswrapper[4675]: I1121 13:58:39.534577 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-jpn8w"] Nov 21 13:58:39 crc kubenswrapper[4675]: I1121 13:58:39.534925 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="dnsmasq-dns" containerID="cri-o://5e284026728b5388c4a81324a90c2445da35d4d6643f6044efd26eeb49c2b5e8" gracePeriod=10 Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.138264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.138579 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.138960 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.150102 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.150306 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.156648 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.888908 4675 generic.go:334] "Generic (PLEG): container finished" podID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerID="5e284026728b5388c4a81324a90c2445da35d4d6643f6044efd26eeb49c2b5e8" exitCode=0 Nov 21 13:58:40 crc kubenswrapper[4675]: I1121 13:58:40.889348 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" event={"ID":"ef666cb3-3002-47d0-9aec-c53581c6a688","Type":"ContainerDied","Data":"5e284026728b5388c4a81324a90c2445da35d4d6643f6044efd26eeb49c2b5e8"} Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.850358 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.919681 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.961229 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-swift-storage-0\") pod \"ef666cb3-3002-47d0-9aec-c53581c6a688\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.961298 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-svc\") pod \"ef666cb3-3002-47d0-9aec-c53581c6a688\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.961322 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-sb\") pod \"ef666cb3-3002-47d0-9aec-c53581c6a688\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.961469 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-config\") pod \"ef666cb3-3002-47d0-9aec-c53581c6a688\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.961500 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgj62\" (UniqueName: \"kubernetes.io/projected/ef666cb3-3002-47d0-9aec-c53581c6a688-kube-api-access-vgj62\") pod \"ef666cb3-3002-47d0-9aec-c53581c6a688\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.961646 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-nb\") pod \"ef666cb3-3002-47d0-9aec-c53581c6a688\" (UID: \"ef666cb3-3002-47d0-9aec-c53581c6a688\") " Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.962711 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" event={"ID":"ef666cb3-3002-47d0-9aec-c53581c6a688","Type":"ContainerDied","Data":"4949a70345aba9b50166738f4d9edbfe596eedcc0a31da56293878c7b59acbfe"} Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.962766 4675 scope.go:117] "RemoveContainer" containerID="5e284026728b5388c4a81324a90c2445da35d4d6643f6044efd26eeb49c2b5e8" Nov 21 13:58:42 crc kubenswrapper[4675]: I1121 13:58:42.966337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef666cb3-3002-47d0-9aec-c53581c6a688-kube-api-access-vgj62" (OuterVolumeSpecName: "kube-api-access-vgj62") pod "ef666cb3-3002-47d0-9aec-c53581c6a688" (UID: "ef666cb3-3002-47d0-9aec-c53581c6a688"). InnerVolumeSpecName "kube-api-access-vgj62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.023372 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef666cb3-3002-47d0-9aec-c53581c6a688" (UID: "ef666cb3-3002-47d0-9aec-c53581c6a688"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.029315 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef666cb3-3002-47d0-9aec-c53581c6a688" (UID: "ef666cb3-3002-47d0-9aec-c53581c6a688"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.032988 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef666cb3-3002-47d0-9aec-c53581c6a688" (UID: "ef666cb3-3002-47d0-9aec-c53581c6a688"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.036243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-config" (OuterVolumeSpecName: "config") pod "ef666cb3-3002-47d0-9aec-c53581c6a688" (UID: "ef666cb3-3002-47d0-9aec-c53581c6a688"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.047177 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef666cb3-3002-47d0-9aec-c53581c6a688" (UID: "ef666cb3-3002-47d0-9aec-c53581c6a688"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.065197 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.065240 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgj62\" (UniqueName: \"kubernetes.io/projected/ef666cb3-3002-47d0-9aec-c53581c6a688-kube-api-access-vgj62\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.065259 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.065273 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.065288 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.065300 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef666cb3-3002-47d0-9aec-c53581c6a688-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.163248 4675 scope.go:117] "RemoveContainer" containerID="e98496fc2fabb24b559772bb16ee79109e85c6fb8a07f7b3cf9008141fd62f98" Nov 21 13:58:43 crc kubenswrapper[4675]: W1121 13:58:43.257307 4675 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef666cb3_3002_47d0_9aec_c53581c6a688.slice/pids.max": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef666cb3_3002_47d0_9aec_c53581c6a688.slice/pids.max: no such device Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.294492 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-jpn8w"] Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.304198 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-jpn8w"] Nov 21 13:58:43 crc kubenswrapper[4675]: E1121 13:58:43.335507 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="87a33291-326b-4010-a851-ec2d41c8a754" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.937420 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="ceilometer-notification-agent" containerID="cri-o://429065759d82e3c2cb6911f24413045efeca9cd4957254ad1098e3f6511b963c" gracePeriod=30 Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.938209 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerStarted","Data":"6492a53bd1ca898e9554f015a8404ce9fd6edaf2f076d280eb351d2433ba7618"} Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.938486 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.938567 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="proxy-httpd" containerID="cri-o://6492a53bd1ca898e9554f015a8404ce9fd6edaf2f076d280eb351d2433ba7618" gracePeriod=30 Nov 21 13:58:43 crc kubenswrapper[4675]: I1121 13:58:43.938732 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="sg-core" containerID="cri-o://f0595c9aa871172606b2199a4303c0bfa0ff490f6d45dcaf0c162699ace1059b" gracePeriod=30 Nov 21 13:58:44 crc kubenswrapper[4675]: I1121 13:58:44.400550 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6kxg7" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" probeResult="failure" output=< Nov 21 13:58:44 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 13:58:44 crc kubenswrapper[4675]: > Nov 21 13:58:44 crc kubenswrapper[4675]: I1121 13:58:44.861746 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" path="/var/lib/kubelet/pods/ef666cb3-3002-47d0-9aec-c53581c6a688/volumes" Nov 21 13:58:44 crc kubenswrapper[4675]: I1121 13:58:44.960831 4675 generic.go:334] "Generic (PLEG): container finished" podID="87a33291-326b-4010-a851-ec2d41c8a754" containerID="6492a53bd1ca898e9554f015a8404ce9fd6edaf2f076d280eb351d2433ba7618" exitCode=0 Nov 21 13:58:44 crc kubenswrapper[4675]: I1121 13:58:44.960867 4675 generic.go:334] "Generic (PLEG): container finished" podID="87a33291-326b-4010-a851-ec2d41c8a754" containerID="f0595c9aa871172606b2199a4303c0bfa0ff490f6d45dcaf0c162699ace1059b" exitCode=2 Nov 21 13:58:44 crc kubenswrapper[4675]: I1121 13:58:44.960875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerDied","Data":"6492a53bd1ca898e9554f015a8404ce9fd6edaf2f076d280eb351d2433ba7618"} Nov 21 13:58:44 crc kubenswrapper[4675]: I1121 13:58:44.960925 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerDied","Data":"f0595c9aa871172606b2199a4303c0bfa0ff490f6d45dcaf0c162699ace1059b"} Nov 21 13:58:45 crc kubenswrapper[4675]: I1121 13:58:45.983603 4675 generic.go:334] "Generic (PLEG): container finished" podID="87a33291-326b-4010-a851-ec2d41c8a754" containerID="429065759d82e3c2cb6911f24413045efeca9cd4957254ad1098e3f6511b963c" exitCode=0 Nov 21 13:58:45 crc kubenswrapper[4675]: I1121 13:58:45.983640 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerDied","Data":"429065759d82e3c2cb6911f24413045efeca9cd4957254ad1098e3f6511b963c"} Nov 21 13:58:45 crc kubenswrapper[4675]: I1121 13:58:45.985734 4675 generic.go:334] "Generic (PLEG): container finished" podID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" 
containerID="d87f3e031452ac6493d5a8f97e1741eb98eae6e086bc9b8294ffedbb54990093" exitCode=0 Nov 21 13:58:45 crc kubenswrapper[4675]: I1121 13:58:45.985795 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljvgq" event={"ID":"9ef50e12-86e6-4c25-b99e-4fc6506d3890","Type":"ContainerDied","Data":"d87f3e031452ac6493d5a8f97e1741eb98eae6e086bc9b8294ffedbb54990093"} Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.135926 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.135976 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.408799 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.545927 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch8mq\" (UniqueName: \"kubernetes.io/projected/87a33291-326b-4010-a851-ec2d41c8a754-kube-api-access-ch8mq\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.545991 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-sg-core-conf-yaml\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546024 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-combined-ca-bundle\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546056 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-log-httpd\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546091 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-config-data\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-scripts\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546193 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-run-httpd\") pod \"87a33291-326b-4010-a851-ec2d41c8a754\" (UID: \"87a33291-326b-4010-a851-ec2d41c8a754\") " Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546840 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.546883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.552292 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-scripts" (OuterVolumeSpecName: "scripts") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.552310 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a33291-326b-4010-a851-ec2d41c8a754-kube-api-access-ch8mq" (OuterVolumeSpecName: "kube-api-access-ch8mq") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "kube-api-access-ch8mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.582450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.604634 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.638398 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-config-data" (OuterVolumeSpecName: "config-data") pod "87a33291-326b-4010-a851-ec2d41c8a754" (UID: "87a33291-326b-4010-a851-ec2d41c8a754"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648892 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648927 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648938 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch8mq\" (UniqueName: \"kubernetes.io/projected/87a33291-326b-4010-a851-ec2d41c8a754-kube-api-access-ch8mq\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648951 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648960 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648976 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87a33291-326b-4010-a851-ec2d41c8a754-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.648984 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a33291-326b-4010-a851-ec2d41c8a754-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4675]: I1121 13:58:46.999814 4675 generic.go:334] "Generic (PLEG): container finished" podID="baacdfb7-787a-462a-8102-472a47283224" containerID="2f21b9737b3dc310046ef45cb62c0bfa007fee44b01d7166c51d06bba472e35b" exitCode=0 Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:46.999909 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n65cw" event={"ID":"baacdfb7-787a-462a-8102-472a47283224","Type":"ContainerDied","Data":"2f21b9737b3dc310046ef45cb62c0bfa007fee44b01d7166c51d06bba472e35b"} Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.005393 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.005502 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87a33291-326b-4010-a851-ec2d41c8a754","Type":"ContainerDied","Data":"16ec8eaf3bb4f0db4554d179856d629e9a3dabab2a0c31bf59491c01a81efee2"} Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.005740 4675 scope.go:117] "RemoveContainer" containerID="6492a53bd1ca898e9554f015a8404ce9fd6edaf2f076d280eb351d2433ba7618" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.037664 4675 scope.go:117] "RemoveContainer" containerID="f0595c9aa871172606b2199a4303c0bfa0ff490f6d45dcaf0c162699ace1059b" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.066920 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.076562 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.101643 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:47 crc kubenswrapper[4675]: E1121 13:58:47.102333 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="ceilometer-notification-agent" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.102428 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="ceilometer-notification-agent" Nov 21 13:58:47 crc kubenswrapper[4675]: E1121 13:58:47.102508 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="sg-core" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.102558 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="sg-core" Nov 21 13:58:47 crc kubenswrapper[4675]: E1121 13:58:47.102615 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="dnsmasq-dns" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.102671 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="dnsmasq-dns" Nov 21 13:58:47 crc kubenswrapper[4675]: E1121 13:58:47.102740 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="proxy-httpd" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.102788 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="proxy-httpd" Nov 21 13:58:47 crc kubenswrapper[4675]: E1121 13:58:47.102856 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="init" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.102910 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="init" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.103248 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="ceilometer-notification-agent" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.103320 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="proxy-httpd" Nov 21 13:58:47 crc 
kubenswrapper[4675]: I1121 13:58:47.103376 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="dnsmasq-dns" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.103436 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a33291-326b-4010-a851-ec2d41c8a754" containerName="sg-core" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.105036 4675 scope.go:117] "RemoveContainer" containerID="429065759d82e3c2cb6911f24413045efeca9cd4957254ad1098e3f6511b963c" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.105521 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.109441 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.109902 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.134478 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.172921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-scripts\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.173039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-run-httpd\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.173104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.173252 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxmw\" (UniqueName: \"kubernetes.io/projected/1a9c792a-f9a3-416b-b131-ac61338200da-kube-api-access-xvxmw\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.173640 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-log-httpd\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.173695 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-config-data\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.173800 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.276701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-scripts\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.276824 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-run-httpd\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.276847 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.276909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxmw\" (UniqueName: \"kubernetes.io/projected/1a9c792a-f9a3-416b-b131-ac61338200da-kube-api-access-xvxmw\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.276949 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-log-httpd\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.276975 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-config-data\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.277011 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.278530 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-run-httpd\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.281307 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.281538 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-log-httpd\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.290174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-scripts\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.300850 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-config-data\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.302386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxmw\" (UniqueName: \"kubernetes.io/projected/1a9c792a-f9a3-416b-b131-ac61338200da-kube-api-access-xvxmw\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.326869 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") " pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.447848 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.459210 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-jpn8w" podUID="ef666cb3-3002-47d0-9aec-c53581c6a688" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: i/o timeout" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.601723 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.684685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-combined-ca-bundle\") pod \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.684780 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrm2g\" (UniqueName: \"kubernetes.io/projected/9ef50e12-86e6-4c25-b99e-4fc6506d3890-kube-api-access-jrm2g\") pod \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.684894 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-db-sync-config-data\") pod \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\" (UID: \"9ef50e12-86e6-4c25-b99e-4fc6506d3890\") " Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.696113 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef50e12-86e6-4c25-b99e-4fc6506d3890-kube-api-access-jrm2g" (OuterVolumeSpecName: "kube-api-access-jrm2g") pod "9ef50e12-86e6-4c25-b99e-4fc6506d3890" (UID: "9ef50e12-86e6-4c25-b99e-4fc6506d3890"). InnerVolumeSpecName "kube-api-access-jrm2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.702269 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9ef50e12-86e6-4c25-b99e-4fc6506d3890" (UID: "9ef50e12-86e6-4c25-b99e-4fc6506d3890"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.761694 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ef50e12-86e6-4c25-b99e-4fc6506d3890" (UID: "9ef50e12-86e6-4c25-b99e-4fc6506d3890"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.789591 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.789631 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrm2g\" (UniqueName: \"kubernetes.io/projected/9ef50e12-86e6-4c25-b99e-4fc6506d3890-kube-api-access-jrm2g\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.789647 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ef50e12-86e6-4c25-b99e-4fc6506d3890-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:47 crc kubenswrapper[4675]: W1121 13:58:47.936029 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a9c792a_f9a3_416b_b131_ac61338200da.slice/crio-b47377ae6a41f1382543cd740d1cc0fea5bdb9c47b8cd2ca05fdca27c5852cea WatchSource:0}: Error finding container b47377ae6a41f1382543cd740d1cc0fea5bdb9c47b8cd2ca05fdca27c5852cea: Status 404 returned error can't find the container with id b47377ae6a41f1382543cd740d1cc0fea5bdb9c47b8cd2ca05fdca27c5852cea Nov 21 13:58:47 crc kubenswrapper[4675]: I1121 13:58:47.942915 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.035933 4675 generic.go:334] "Generic (PLEG): container finished" podID="d8406cb5-f871-4355-811c-7090afd8aa2e" containerID="22ede8ee41a76ab5eec6d7f27028d967e19f6633224401e6810cb5f13928bcda" exitCode=0 Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.036034 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w28m5" event={"ID":"d8406cb5-f871-4355-811c-7090afd8aa2e","Type":"ContainerDied","Data":"22ede8ee41a76ab5eec6d7f27028d967e19f6633224401e6810cb5f13928bcda"} Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.038435 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerStarted","Data":"b47377ae6a41f1382543cd740d1cc0fea5bdb9c47b8cd2ca05fdca27c5852cea"} Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.047089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljvgq" event={"ID":"9ef50e12-86e6-4c25-b99e-4fc6506d3890","Type":"ContainerDied","Data":"5ba3bb58e75f108d6c776f77fb36eba97d86942b477e78940626b487e698b73b"} Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.047137 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba3bb58e75f108d6c776f77fb36eba97d86942b477e78940626b487e698b73b" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.047102 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ljvgq" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.277636 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6998886fc9-xdttj"] Nov 21 13:58:48 crc kubenswrapper[4675]: E1121 13:58:48.278489 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" containerName="barbican-db-sync" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.278518 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" containerName="barbican-db-sync" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.278796 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" containerName="barbican-db-sync" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.280275 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.287684 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.308560 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57c587945d-p7z7g"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.317003 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.317682 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5zw64" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.326851 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.329757 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.382697 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6998886fc9-xdttj"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.411592 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57c587945d-p7z7g"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413146 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-combined-ca-bundle\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413189 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/521461a2-7f1f-43b2-8ff9-be3a054e25f6-logs\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413224 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fsps\" (UniqueName: \"kubernetes.io/projected/521461a2-7f1f-43b2-8ff9-be3a054e25f6-kube-api-access-4fsps\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-config-data-custom\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413382 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-config-data-custom\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413451 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4328076-0e3d-40b2-b686-502e7f263a2c-logs\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413521 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-combined-ca-bundle\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: 
\"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-config-data\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413609 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxhp\" (UniqueName: \"kubernetes.io/projected/d4328076-0e3d-40b2-b686-502e7f263a2c-kube-api-access-blxhp\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.413664 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-config-data\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.514927 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-fd9vx"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516420 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-combined-ca-bundle\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-config-data\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516543 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxhp\" (UniqueName: \"kubernetes.io/projected/d4328076-0e3d-40b2-b686-502e7f263a2c-kube-api-access-blxhp\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-config-data\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-combined-ca-bundle\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" 
Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516712 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fsps\" (UniqueName: \"kubernetes.io/projected/521461a2-7f1f-43b2-8ff9-be3a054e25f6-kube-api-access-4fsps\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516738 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/521461a2-7f1f-43b2-8ff9-be3a054e25f6-logs\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-config-data-custom\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516874 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-config-data-custom\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.516927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4328076-0e3d-40b2-b686-502e7f263a2c-logs\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.518108 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4328076-0e3d-40b2-b686-502e7f263a2c-logs\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.518685 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.520245 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/521461a2-7f1f-43b2-8ff9-be3a054e25f6-logs\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.522472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-combined-ca-bundle\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.524952 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-config-data-custom\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.530148 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-combined-ca-bundle\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.543916 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-config-data-custom\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.545021 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-fd9vx"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.545686 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4328076-0e3d-40b2-b686-502e7f263a2c-config-data\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.551291 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521461a2-7f1f-43b2-8ff9-be3a054e25f6-config-data\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.567299 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fsps\" (UniqueName: \"kubernetes.io/projected/521461a2-7f1f-43b2-8ff9-be3a054e25f6-kube-api-access-4fsps\") pod \"barbican-keystone-listener-57c587945d-p7z7g\" (UID: \"521461a2-7f1f-43b2-8ff9-be3a054e25f6\") " pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.578888 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-blxhp\" (UniqueName: \"kubernetes.io/projected/d4328076-0e3d-40b2-b686-502e7f263a2c-kube-api-access-blxhp\") pod \"barbican-worker-6998886fc9-xdttj\" (UID: \"d4328076-0e3d-40b2-b686-502e7f263a2c\") " pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.613606 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6998886fc9-xdttj" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.619251 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.619313 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.619606 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9d9\" (UniqueName: \"kubernetes.io/projected/750e0c98-5881-49bf-b13a-2346d8a5efd9-kube-api-access-gk9d9\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.619744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-config\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.619795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.619851 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.630558 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74b6656858-qfxkd"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.642848 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.651353 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.671096 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.674266 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74b6656858-qfxkd"] Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.697625 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-n65cw" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.723665 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.723816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9d9\" (UniqueName: \"kubernetes.io/projected/750e0c98-5881-49bf-b13a-2346d8a5efd9-kube-api-access-gk9d9\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.723867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data-custom\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-config\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724056 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmxw\" (UniqueName: \"kubernetes.io/projected/cefea82b-2103-4d35-a134-a1f96a2abc5f-kube-api-access-nmmxw\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-combined-ca-bundle\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724148 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: 
\"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724228 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724259 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefea82b-2103-4d35-a134-a1f96a2abc5f-logs\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.724350 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.727792 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-config\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.729353 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.729437 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.729665 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.743019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc 
kubenswrapper[4675]: I1121 13:58:48.758927 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9d9\" (UniqueName: \"kubernetes.io/projected/750e0c98-5881-49bf-b13a-2346d8a5efd9-kube-api-access-gk9d9\") pod \"dnsmasq-dns-848cf88cfc-fd9vx\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.832284 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmcs5\" (UniqueName: \"kubernetes.io/projected/baacdfb7-787a-462a-8102-472a47283224-kube-api-access-fmcs5\") pod \"baacdfb7-787a-462a-8102-472a47283224\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.832339 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-config-data\") pod \"baacdfb7-787a-462a-8102-472a47283224\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.832475 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-combined-ca-bundle\") pod \"baacdfb7-787a-462a-8102-472a47283224\" (UID: \"baacdfb7-787a-462a-8102-472a47283224\") " Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.832983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data-custom\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.833138 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmxw\" (UniqueName: \"kubernetes.io/projected/cefea82b-2103-4d35-a134-a1f96a2abc5f-kube-api-access-nmmxw\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.833174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-combined-ca-bundle\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.833260 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefea82b-2103-4d35-a134-a1f96a2abc5f-logs\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.833353 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.841137 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cefea82b-2103-4d35-a134-a1f96a2abc5f-logs\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.843289 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baacdfb7-787a-462a-8102-472a47283224-kube-api-access-fmcs5" (OuterVolumeSpecName: "kube-api-access-fmcs5") pod "baacdfb7-787a-462a-8102-472a47283224" (UID: "baacdfb7-787a-462a-8102-472a47283224"). InnerVolumeSpecName "kube-api-access-fmcs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.843319 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.846324 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data-custom\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.856327 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-combined-ca-bundle\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.885907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmxw\" (UniqueName: \"kubernetes.io/projected/cefea82b-2103-4d35-a134-a1f96a2abc5f-kube-api-access-nmmxw\") pod \"barbican-api-74b6656858-qfxkd\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.936139 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baacdfb7-787a-462a-8102-472a47283224" (UID: "baacdfb7-787a-462a-8102-472a47283224"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.945294 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmcs5\" (UniqueName: \"kubernetes.io/projected/baacdfb7-787a-462a-8102-472a47283224-kube-api-access-fmcs5\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.946868 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a33291-326b-4010-a851-ec2d41c8a754" path="/var/lib/kubelet/pods/87a33291-326b-4010-a851-ec2d41c8a754/volumes" Nov 21 13:58:48 crc kubenswrapper[4675]: I1121 13:58:48.947041 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.010506 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-config-data" (OuterVolumeSpecName: "config-data") pod "baacdfb7-787a-462a-8102-472a47283224" (UID: "baacdfb7-787a-462a-8102-472a47283224"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.025955 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.051011 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baacdfb7-787a-462a-8102-472a47283224-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.060594 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.170193 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n65cw" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.175906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n65cw" event={"ID":"baacdfb7-787a-462a-8102-472a47283224","Type":"ContainerDied","Data":"a4955f99decc7bb4e350cd08abd4342b674910a3ea08e0fdb0f0678c4b0dabc3"} Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.176048 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4955f99decc7bb4e350cd08abd4342b674910a3ea08e0fdb0f0678c4b0dabc3" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.205568 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerStarted","Data":"c098e4bd6707d7f500240a9676157dd14fcc74c844b5aa3d7316c14abb374ca6"} Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.343339 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6998886fc9-xdttj"] Nov 21 13:58:49 crc kubenswrapper[4675]: W1121 13:58:49.356342 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4328076_0e3d_40b2_b686_502e7f263a2c.slice/crio-9c00b1bbb52f0db5fd7c59e168404e5e0ed6658e44210cabff525671178e3d26 WatchSource:0}: Error finding container 9c00b1bbb52f0db5fd7c59e168404e5e0ed6658e44210cabff525671178e3d26: Status 404 returned error can't find the container with id 9c00b1bbb52f0db5fd7c59e168404e5e0ed6658e44210cabff525671178e3d26 Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.565159 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57c587945d-p7z7g"] Nov 21 13:58:49 crc kubenswrapper[4675]: W1121 13:58:49.574005 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod521461a2_7f1f_43b2_8ff9_be3a054e25f6.slice/crio-5d2d53b949f761b580088fa6ebb2283e5b50f9ab63157201544292d6c9fd2955 WatchSource:0}: Error finding container 5d2d53b949f761b580088fa6ebb2283e5b50f9ab63157201544292d6c9fd2955: Status 404 returned error can't find the container with id 5d2d53b949f761b580088fa6ebb2283e5b50f9ab63157201544292d6c9fd2955 Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.850088 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w28m5" Nov 21 13:58:49 crc kubenswrapper[4675]: I1121 13:58:49.927850 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-fd9vx"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.022742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-db-sync-config-data\") pod \"d8406cb5-f871-4355-811c-7090afd8aa2e\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.022786 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8406cb5-f871-4355-811c-7090afd8aa2e-etc-machine-id\") pod \"d8406cb5-f871-4355-811c-7090afd8aa2e\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.022835 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-config-data\") pod \"d8406cb5-f871-4355-811c-7090afd8aa2e\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.022877 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-combined-ca-bundle\") pod \"d8406cb5-f871-4355-811c-7090afd8aa2e\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.022941 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtdfh\" (UniqueName: \"kubernetes.io/projected/d8406cb5-f871-4355-811c-7090afd8aa2e-kube-api-access-vtdfh\") pod \"d8406cb5-f871-4355-811c-7090afd8aa2e\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.022988 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-scripts\") pod \"d8406cb5-f871-4355-811c-7090afd8aa2e\" (UID: \"d8406cb5-f871-4355-811c-7090afd8aa2e\") " Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.049458 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-scripts" (OuterVolumeSpecName: "scripts") pod "d8406cb5-f871-4355-811c-7090afd8aa2e" (UID: "d8406cb5-f871-4355-811c-7090afd8aa2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.053467 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8406cb5-f871-4355-811c-7090afd8aa2e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d8406cb5-f871-4355-811c-7090afd8aa2e" (UID: "d8406cb5-f871-4355-811c-7090afd8aa2e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.112036 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d8406cb5-f871-4355-811c-7090afd8aa2e" (UID: "d8406cb5-f871-4355-811c-7090afd8aa2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.141791 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8406cb5-f871-4355-811c-7090afd8aa2e-kube-api-access-vtdfh" (OuterVolumeSpecName: "kube-api-access-vtdfh") pod "d8406cb5-f871-4355-811c-7090afd8aa2e" (UID: "d8406cb5-f871-4355-811c-7090afd8aa2e"). InnerVolumeSpecName "kube-api-access-vtdfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.144734 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.144768 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.144781 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8406cb5-f871-4355-811c-7090afd8aa2e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.144792 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtdfh\" (UniqueName: \"kubernetes.io/projected/d8406cb5-f871-4355-811c-7090afd8aa2e-kube-api-access-vtdfh\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.205206 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8406cb5-f871-4355-811c-7090afd8aa2e" (UID: "d8406cb5-f871-4355-811c-7090afd8aa2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.282274 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.299849 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74b6656858-qfxkd"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.413302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-config-data" (OuterVolumeSpecName: "config-data") pod "d8406cb5-f871-4355-811c-7090afd8aa2e" (UID: "d8406cb5-f871-4355-811c-7090afd8aa2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.432398 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" event={"ID":"521461a2-7f1f-43b2-8ff9-be3a054e25f6","Type":"ContainerStarted","Data":"5d2d53b949f761b580088fa6ebb2283e5b50f9ab63157201544292d6c9fd2955"} Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.446413 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:58:50 crc kubenswrapper[4675]: E1121 13:58:50.447157 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baacdfb7-787a-462a-8102-472a47283224" containerName="heat-db-sync" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.447175 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="baacdfb7-787a-462a-8102-472a47283224" containerName="heat-db-sync" Nov 21 13:58:50 crc kubenswrapper[4675]: E1121 13:58:50.447197 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8406cb5-f871-4355-811c-7090afd8aa2e" containerName="cinder-db-sync" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.447205 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8406cb5-f871-4355-811c-7090afd8aa2e" containerName="cinder-db-sync" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.456142 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8406cb5-f871-4355-811c-7090afd8aa2e" containerName="cinder-db-sync" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.456193 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="baacdfb7-787a-462a-8102-472a47283224" containerName="heat-db-sync" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.457519 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.461399 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.472380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" event={"ID":"750e0c98-5881-49bf-b13a-2346d8a5efd9","Type":"ContainerStarted","Data":"500d976821efc383faa8750004eef832f21d440bea4ef6540d585d74071f7354"} Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.490666 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.490729 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a639838-cd16-4c4b-9a5c-a76396f54d9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.490931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.490990 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.491108 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.491163 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vbm\" (UniqueName: \"kubernetes.io/projected/6a639838-cd16-4c4b-9a5c-a76396f54d9e-kube-api-access-p6vbm\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.491272 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8406cb5-f871-4355-811c-7090afd8aa2e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.498785 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w28m5" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.498778 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w28m5" event={"ID":"d8406cb5-f871-4355-811c-7090afd8aa2e","Type":"ContainerDied","Data":"dd008beade3f9ea953bed91543ce6c517cec17e50feef22cf9b542aa39619a14"} Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.499235 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd008beade3f9ea953bed91543ce6c517cec17e50feef22cf9b542aa39619a14" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.527175 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.531651 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerStarted","Data":"a2eab34636acae345c92e2d8d0fd78a5a04945f4676ff2ad7f9782407e9d14cb"} Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.559689 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6998886fc9-xdttj" event={"ID":"d4328076-0e3d-40b2-b686-502e7f263a2c","Type":"ContainerStarted","Data":"9c00b1bbb52f0db5fd7c59e168404e5e0ed6658e44210cabff525671178e3d26"} Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.596677 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.596749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6vbm\" (UniqueName: \"kubernetes.io/projected/6a639838-cd16-4c4b-9a5c-a76396f54d9e-kube-api-access-p6vbm\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.596891 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.596932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a639838-cd16-4c4b-9a5c-a76396f54d9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.596956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.596984 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " 
pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.598371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a639838-cd16-4c4b-9a5c-a76396f54d9e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.605041 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.606805 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.610835 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.615130 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-fd9vx"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.618256 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.633917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6vbm\" (UniqueName: \"kubernetes.io/projected/6a639838-cd16-4c4b-9a5c-a76396f54d9e-kube-api-access-p6vbm\") pod \"cinder-scheduler-0\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.723010 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8z4kx"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.725118 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.761894 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8z4kx"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.784253 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.786288 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.793377 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.794158 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.804610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.804719 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-config\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.804766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-svc\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.805472 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.805526 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.805612 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfjm\" (UniqueName: \"kubernetes.io/projected/18c5726b-3250-4397-bbed-a88da8daa0de-kube-api-access-fdfjm\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.834769 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.907041 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908143 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908354 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908452 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-scripts\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data-custom\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908546 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdfjm\" (UniqueName: \"kubernetes.io/projected/18c5726b-3250-4397-bbed-a88da8daa0de-kube-api-access-fdfjm\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908633 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908707 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxkd\" (UniqueName: \"kubernetes.io/projected/b7c2373f-bce5-424a-9747-56897bb05444-kube-api-access-7vxkd\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908724 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-config\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc 
kubenswrapper[4675]: I1121 13:58:50.908755 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c2373f-bce5-424a-9747-56897bb05444-logs\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908794 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-svc\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908813 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7c2373f-bce5-424a-9747-56897bb05444-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.908892 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.911000 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.911584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-config\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.916857 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.916990 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-svc\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:50 crc kubenswrapper[4675]: I1121 13:58:50.947951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdfjm\" (UniqueName: 
\"kubernetes.io/projected/18c5726b-3250-4397-bbed-a88da8daa0de-kube-api-access-fdfjm\") pod \"dnsmasq-dns-6578955fd5-8z4kx\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010276 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-scripts\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data-custom\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010645 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vxkd\" (UniqueName: \"kubernetes.io/projected/b7c2373f-bce5-424a-9747-56897bb05444-kube-api-access-7vxkd\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010677 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c2373f-bce5-424a-9747-56897bb05444-logs\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010709 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010731 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7c2373f-bce5-424a-9747-56897bb05444-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.010762 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.012391 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7c2373f-bce5-424a-9747-56897bb05444-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.012788 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c2373f-bce5-424a-9747-56897bb05444-logs\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.020121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.020217 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.021654 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data-custom\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.022968 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-scripts\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.049365 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vxkd\" (UniqueName: \"kubernetes.io/projected/b7c2373f-bce5-424a-9747-56897bb05444-kube-api-access-7vxkd\") pod \"cinder-api-0\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") " pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.081212 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.106693 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.481403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.601971 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a639838-cd16-4c4b-9a5c-a76396f54d9e","Type":"ContainerStarted","Data":"9731978e9f7f0fa58b9202557fda03f168db2c0882f5903ee3f692dbe9b40486"} Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.608263 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerStarted","Data":"a16920f6623c44c18f72947a03388fb206f87260eb80bbaaf9dc97629a146bd4"} Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.611252 4675 generic.go:334] "Generic (PLEG): container finished" podID="750e0c98-5881-49bf-b13a-2346d8a5efd9" containerID="45f7fd88cbe72d97a391fcdfa42e966dba0616ec61da3190a8b0f8419ebb7e71" exitCode=0 Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.611309 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" event={"ID":"750e0c98-5881-49bf-b13a-2346d8a5efd9","Type":"ContainerDied","Data":"45f7fd88cbe72d97a391fcdfa42e966dba0616ec61da3190a8b0f8419ebb7e71"} Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.620288 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b6656858-qfxkd" event={"ID":"cefea82b-2103-4d35-a134-a1f96a2abc5f","Type":"ContainerStarted","Data":"980c00486e8cbacbbdc77c27cb5b0acdcaa81e78a9f76deb4c806ec40a114280"} Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.620345 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b6656858-qfxkd" event={"ID":"cefea82b-2103-4d35-a134-a1f96a2abc5f","Type":"ContainerStarted","Data":"d4aa957b443d6796e49f622dbad17c869ebe5a4fd8c2ae3002391c5b8f4f2819"} Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.620358 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b6656858-qfxkd" event={"ID":"cefea82b-2103-4d35-a134-a1f96a2abc5f","Type":"ContainerStarted","Data":"27ba54ee79cc0894047ed3b5a4c68cdc7b86a5729ba32cd46e93395caccc4944"} Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.621401 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.621477 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.661508 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74b6656858-qfxkd" podStartSLOduration=3.661492342 podStartE2EDuration="3.661492342s" podCreationTimestamp="2025-11-21 13:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:51.661044361 +0000 UTC m=+1608.387459088" watchObservedRunningTime="2025-11-21 13:58:51.661492342 +0000 UTC m=+1608.387907069" Nov 21 13:58:51 crc kubenswrapper[4675]: I1121 13:58:51.800467 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8z4kx"] Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.179372 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-api-0"] Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.450941 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.487092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk9d9\" (UniqueName: \"kubernetes.io/projected/750e0c98-5881-49bf-b13a-2346d8a5efd9-kube-api-access-gk9d9\") pod \"750e0c98-5881-49bf-b13a-2346d8a5efd9\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.487175 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-svc\") pod \"750e0c98-5881-49bf-b13a-2346d8a5efd9\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.487275 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-nb\") pod \"750e0c98-5881-49bf-b13a-2346d8a5efd9\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.487488 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-sb\") pod \"750e0c98-5881-49bf-b13a-2346d8a5efd9\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.487570 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-config\") pod \"750e0c98-5881-49bf-b13a-2346d8a5efd9\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.487650 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-swift-storage-0\") pod \"750e0c98-5881-49bf-b13a-2346d8a5efd9\" (UID: \"750e0c98-5881-49bf-b13a-2346d8a5efd9\") " Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.520038 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750e0c98-5881-49bf-b13a-2346d8a5efd9-kube-api-access-gk9d9" (OuterVolumeSpecName: "kube-api-access-gk9d9") pod "750e0c98-5881-49bf-b13a-2346d8a5efd9" (UID: "750e0c98-5881-49bf-b13a-2346d8a5efd9"). InnerVolumeSpecName "kube-api-access-gk9d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.562462 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "750e0c98-5881-49bf-b13a-2346d8a5efd9" (UID: "750e0c98-5881-49bf-b13a-2346d8a5efd9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.564409 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "750e0c98-5881-49bf-b13a-2346d8a5efd9" (UID: "750e0c98-5881-49bf-b13a-2346d8a5efd9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.591483 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk9d9\" (UniqueName: \"kubernetes.io/projected/750e0c98-5881-49bf-b13a-2346d8a5efd9-kube-api-access-gk9d9\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.591516 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.591525 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.614870 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "750e0c98-5881-49bf-b13a-2346d8a5efd9" (UID: "750e0c98-5881-49bf-b13a-2346d8a5efd9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.630905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "750e0c98-5881-49bf-b13a-2346d8a5efd9" (UID: "750e0c98-5881-49bf-b13a-2346d8a5efd9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.631141 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-config" (OuterVolumeSpecName: "config") pod "750e0c98-5881-49bf-b13a-2346d8a5efd9" (UID: "750e0c98-5881-49bf-b13a-2346d8a5efd9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.638392 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" event={"ID":"750e0c98-5881-49bf-b13a-2346d8a5efd9","Type":"ContainerDied","Data":"500d976821efc383faa8750004eef832f21d440bea4ef6540d585d74071f7354"} Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.638453 4675 scope.go:117] "RemoveContainer" containerID="45f7fd88cbe72d97a391fcdfa42e966dba0616ec61da3190a8b0f8419ebb7e71" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.638594 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-fd9vx" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.656589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" event={"ID":"18c5726b-3250-4397-bbed-a88da8daa0de","Type":"ContainerStarted","Data":"5b7f19181fdb25275cd4532615ffc6a5c129b114a5f36997a51dcd1de26e84c0"} Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.663755 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b7c2373f-bce5-424a-9747-56897bb05444","Type":"ContainerStarted","Data":"0378a49471878a214b15b3fde10f8f6cfdfbd7e850257e82bbab4d32755f6a3f"} Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.694244 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.694279 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.694289 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750e0c98-5881-49bf-b13a-2346d8a5efd9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.765154 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-fd9vx"] Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.781147 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-fd9vx"] Nov 21 13:58:52 crc kubenswrapper[4675]: I1121 13:58:52.868835 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750e0c98-5881-49bf-b13a-2346d8a5efd9" path="/var/lib/kubelet/pods/750e0c98-5881-49bf-b13a-2346d8a5efd9/volumes" Nov 21 13:58:53 crc kubenswrapper[4675]: I1121 13:58:53.613765 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:58:53 crc kubenswrapper[4675]: I1121 13:58:53.673769 4675 generic.go:334] "Generic (PLEG): container finished" podID="18c5726b-3250-4397-bbed-a88da8daa0de" containerID="99c478097479c76e6a304b82fea4b01c4a3bad183ba8b5dbd379d3c67c4c3ebc" exitCode=0 Nov 21 13:58:53 crc kubenswrapper[4675]: I1121 13:58:53.673816 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" event={"ID":"18c5726b-3250-4397-bbed-a88da8daa0de","Type":"ContainerDied","Data":"99c478097479c76e6a304b82fea4b01c4a3bad183ba8b5dbd379d3c67c4c3ebc"} Nov 21 13:58:53 crc kubenswrapper[4675]: I1121 13:58:53.675617 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b7c2373f-bce5-424a-9747-56897bb05444","Type":"ContainerStarted","Data":"13c5aa0875116ba8f569356c5c1108991e2aba4aa4786c798e52a3b032e04b35"} Nov 21 13:58:54 crc kubenswrapper[4675]: I1121 13:58:54.403195 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6kxg7" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" probeResult="failure" output=< Nov 21 13:58:54 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 13:58:54 crc kubenswrapper[4675]: > Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 
13:58:55.472511 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85bfdcc858-xc95t"] Nov 21 13:58:55 crc kubenswrapper[4675]: E1121 13:58:55.473576 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750e0c98-5881-49bf-b13a-2346d8a5efd9" containerName="init" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.473590 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="750e0c98-5881-49bf-b13a-2346d8a5efd9" containerName="init" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.473804 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="750e0c98-5881-49bf-b13a-2346d8a5efd9" containerName="init" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.475162 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.479651 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.479805 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.532841 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85bfdcc858-xc95t"] Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574371 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf9zs\" (UniqueName: \"kubernetes.io/projected/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-kube-api-access-qf9zs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574450 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-config-data\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574515 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-public-tls-certs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574587 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-internal-tls-certs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-combined-ca-bundle\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574687 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-logs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.574734 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-config-data-custom\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.676753 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-combined-ca-bundle\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.677151 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-logs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.677197 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-config-data-custom\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.677275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf9zs\" (UniqueName: \"kubernetes.io/projected/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-kube-api-access-qf9zs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.677323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-config-data\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.677368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-public-tls-certs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.677428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-internal-tls-certs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.678531 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-logs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.681578 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-internal-tls-certs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.684859 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-combined-ca-bundle\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.685666 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-public-tls-certs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.693260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-config-data-custom\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.697007 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf9zs\" (UniqueName: \"kubernetes.io/projected/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-kube-api-access-qf9zs\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.698341 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370d8c4c-811f-4e1e-b801-828d8fa5d1c2-config-data\") pod \"barbican-api-85bfdcc858-xc95t\" (UID: \"370d8c4c-811f-4e1e-b801-828d8fa5d1c2\") " pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.755590 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" event={"ID":"521461a2-7f1f-43b2-8ff9-be3a054e25f6","Type":"ContainerStarted","Data":"07c9fcc48f5f099e0ede8357e6be90949caa93c8b6748e0885a52098f05a9689"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.755636 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" event={"ID":"521461a2-7f1f-43b2-8ff9-be3a054e25f6","Type":"ContainerStarted","Data":"9bfdd710bb90b827216357ff282930490bf7773f241b62f82808fd6a0be77976"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.759496 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" 
event={"ID":"18c5726b-3250-4397-bbed-a88da8daa0de","Type":"ContainerStarted","Data":"0ee36b2cd00b1a6b03a0d7a2f85eb018b58cf861ba33c265ed0fe3cf79cf1552"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.760461 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.769798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b7c2373f-bce5-424a-9747-56897bb05444","Type":"ContainerStarted","Data":"b36d0ea5d1306c394b12fd383e8b0388fdbc8f610967e86e89064936a007db90"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.769979 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api-log" containerID="cri-o://13c5aa0875116ba8f569356c5c1108991e2aba4aa4786c798e52a3b032e04b35" gracePeriod=30 Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.770080 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.770115 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api" containerID="cri-o://b36d0ea5d1306c394b12fd383e8b0388fdbc8f610967e86e89064936a007db90" gracePeriod=30 Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.792984 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerStarted","Data":"7b399597907711517d0542971cfa5b8956e074a0e36399f16c96ad892ef696a8"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.793532 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.799768 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57c587945d-p7z7g" podStartSLOduration=3.082376522 podStartE2EDuration="7.799744753s" podCreationTimestamp="2025-11-21 13:58:48 +0000 UTC" firstStartedPulling="2025-11-21 13:58:49.597260705 +0000 UTC m=+1606.323675432" lastFinishedPulling="2025-11-21 13:58:54.314628926 +0000 UTC m=+1611.041043663" observedRunningTime="2025-11-21 13:58:55.778264966 +0000 UTC m=+1612.504679703" watchObservedRunningTime="2025-11-21 13:58:55.799744753 +0000 UTC m=+1612.526159480" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.809802 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.810727 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" podStartSLOduration=5.810711287 podStartE2EDuration="5.810711287s" podCreationTimestamp="2025-11-21 13:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:55.809836925 +0000 UTC m=+1612.536251652" watchObservedRunningTime="2025-11-21 13:58:55.810711287 +0000 UTC m=+1612.537126014" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.834446 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a639838-cd16-4c4b-9a5c-a76396f54d9e","Type":"ContainerStarted","Data":"ee9f3c2218e317e0924e0f7bf88a9b86bf0be4120eb523fc9bf0eae71a1fa8a9"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.840527 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6998886fc9-xdttj" event={"ID":"d4328076-0e3d-40b2-b686-502e7f263a2c","Type":"ContainerStarted","Data":"4f0a8241cbf8e744e9c54c18a499c595c8fb48e8aa8412e9495098d23c5b41b1"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.840566 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6998886fc9-xdttj" event={"ID":"d4328076-0e3d-40b2-b686-502e7f263a2c","Type":"ContainerStarted","Data":"b47912d7ec8d6e4652937ddf76d0f30b5df393df92d69bcfe6d151d36320b727"} Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.865840 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.865815504 podStartE2EDuration="5.865815504s" podCreationTimestamp="2025-11-21 13:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:55.853787613 +0000 UTC m=+1612.580202360" watchObservedRunningTime="2025-11-21 13:58:55.865815504 +0000 UTC m=+1612.592230231" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.908407 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.532364692 podStartE2EDuration="8.908385308s" podCreationTimestamp="2025-11-21 13:58:47 +0000 UTC" firstStartedPulling="2025-11-21 13:58:47.938647841 +0000 UTC m=+1604.665062568" lastFinishedPulling="2025-11-21 13:58:54.314668457 +0000 UTC m=+1611.041083184" observedRunningTime="2025-11-21 13:58:55.883932167 +0000 UTC m=+1612.610346894" watchObservedRunningTime="2025-11-21 13:58:55.908385308 +0000 UTC m=+1612.634800035" Nov 21 13:58:55 crc kubenswrapper[4675]: I1121 13:58:55.932347 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6998886fc9-xdttj" podStartSLOduration=2.989541742 podStartE2EDuration="7.932325746s" podCreationTimestamp="2025-11-21 13:58:48 +0000 UTC" firstStartedPulling="2025-11-21 13:58:49.371918484 +0000 UTC m=+1606.098333211" lastFinishedPulling="2025-11-21 13:58:54.314702488 +0000 UTC m=+1611.041117215" observedRunningTime="2025-11-21 13:58:55.905025864 +0000 UTC m=+1612.631440601" watchObservedRunningTime="2025-11-21 13:58:55.932325746 +0000 UTC m=+1612.658740473" Nov 21 13:58:56 crc kubenswrapper[4675]: W1121 13:58:56.545690 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370d8c4c_811f_4e1e_b801_828d8fa5d1c2.slice/crio-c4669cf09f65d94cab9993a20610579781b990d53f3ec60f78d7a0f5ca1b09e0 WatchSource:0}: Error finding container c4669cf09f65d94cab9993a20610579781b990d53f3ec60f78d7a0f5ca1b09e0: Status 404 returned error can't find the container with id c4669cf09f65d94cab9993a20610579781b990d53f3ec60f78d7a0f5ca1b09e0 Nov 21 13:58:56 crc kubenswrapper[4675]: I1121 13:58:56.546292 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85bfdcc858-xc95t"] Nov 21 13:58:56 crc kubenswrapper[4675]: I1121 13:58:56.902245 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bfdcc858-xc95t" event={"ID":"370d8c4c-811f-4e1e-b801-828d8fa5d1c2","Type":"ContainerStarted","Data":"c4669cf09f65d94cab9993a20610579781b990d53f3ec60f78d7a0f5ca1b09e0"} Nov 21 13:58:56 crc kubenswrapper[4675]: I1121 13:58:56.912873 4675 generic.go:334] "Generic (PLEG): container finished" podID="b7c2373f-bce5-424a-9747-56897bb05444" containerID="13c5aa0875116ba8f569356c5c1108991e2aba4aa4786c798e52a3b032e04b35" exitCode=143 Nov 21 13:58:56 crc kubenswrapper[4675]: I1121 13:58:56.912952 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b7c2373f-bce5-424a-9747-56897bb05444","Type":"ContainerDied","Data":"13c5aa0875116ba8f569356c5c1108991e2aba4aa4786c798e52a3b032e04b35"} Nov 21 13:58:56 crc kubenswrapper[4675]: I1121 13:58:56.915820 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a639838-cd16-4c4b-9a5c-a76396f54d9e","Type":"ContainerStarted","Data":"7a31cdb7d39fb102142177fd5eceae1869e3a38a86fd83285358ea2ec3ec5a4d"} Nov 21 13:58:56 crc kubenswrapper[4675]: I1121 13:58:56.944343 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.104642397 podStartE2EDuration="6.944316332s" podCreationTimestamp="2025-11-21 13:58:50 +0000 UTC" firstStartedPulling="2025-11-21 13:58:51.503605777 +0000 UTC m=+1608.230020504" lastFinishedPulling="2025-11-21 13:58:54.343279712 +0000 UTC m=+1611.069694439" observedRunningTime="2025-11-21 13:58:56.938785854 +0000 UTC m=+1613.665200581" watchObservedRunningTime="2025-11-21 13:58:56.944316332 +0000 UTC m=+1613.670731059" Nov 21 13:58:57 crc kubenswrapper[4675]: I1121 13:58:57.946760 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bfdcc858-xc95t" event={"ID":"370d8c4c-811f-4e1e-b801-828d8fa5d1c2","Type":"ContainerStarted","Data":"6edc85b3f0fbb4d811d6d32d4f518494e9e6c9fd9c45287d16fe060d2652de1e"} Nov 21 13:58:57 crc kubenswrapper[4675]: I1121 13:58:57.948378 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:57 crc kubenswrapper[4675]: I1121 13:58:57.948561 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:58:57 crc kubenswrapper[4675]: I1121 13:58:57.948655 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bfdcc858-xc95t" event={"ID":"370d8c4c-811f-4e1e-b801-828d8fa5d1c2","Type":"ContainerStarted","Data":"08f6886a09813ae6786e5846a062a14460f238d03002464abf8d4e027d77fdce"} Nov 21 13:58:57 crc kubenswrapper[4675]: I1121 13:58:57.976623 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85bfdcc858-xc95t" 
podStartSLOduration=2.976605996 podStartE2EDuration="2.976605996s" podCreationTimestamp="2025-11-21 13:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:57.972212586 +0000 UTC m=+1614.698627313" watchObservedRunningTime="2025-11-21 13:58:57.976605996 +0000 UTC m=+1614.703020723" Nov 21 13:58:59 crc kubenswrapper[4675]: I1121 13:58:59.436917 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:59:00 crc kubenswrapper[4675]: I1121 13:59:00.836799 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.084283 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.148578 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-xh6kv"] Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.148843 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerName="dnsmasq-dns" containerID="cri-o://cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3" gracePeriod=10 Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.215566 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.351785 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.367936 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.810437 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.962514 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:59:01 crc kubenswrapper[4675]: I1121 13:59:01.984242 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dbf4b8f9c-tnln2" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.033359 4675 generic.go:334] "Generic (PLEG): container finished" podID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerID="cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3" exitCode=0 Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.033606 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="cinder-scheduler" containerID="cri-o://ee9f3c2218e317e0924e0f7bf88a9b86bf0be4120eb523fc9bf0eae71a1fa8a9" gracePeriod=30 Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.034093 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.034127 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="probe" containerID="cri-o://7a31cdb7d39fb102142177fd5eceae1869e3a38a86fd83285358ea2ec3ec5a4d" gracePeriod=30 Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.034086 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" event={"ID":"63c41720-bda4-42fd-a9e0-f07fa4d0ea64","Type":"ContainerDied","Data":"cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3"} Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.035278 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-xh6kv" event={"ID":"63c41720-bda4-42fd-a9e0-f07fa4d0ea64","Type":"ContainerDied","Data":"062484bc412adfbb0a9f4b292e17ff157506f4370cb6bd91dcc881593b7508c6"} Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.035307 4675 scope.go:117] "RemoveContainer" containerID="cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.120180 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58fb49d57b-hpfw5"] Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.120400 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58fb49d57b-hpfw5" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-api" containerID="cri-o://0afd3aed69fe00b4a8898f6932f4eac85e152ed13548de4023674c9b51fce34d" gracePeriod=30 Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.121018 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58fb49d57b-hpfw5" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-httpd" containerID="cri-o://6be953ebc702dbb085f2794e1ff331a85dc7e242bcd41aae2ced637e99f4cba6" gracePeriod=30 Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.146877 4675 scope.go:117] "RemoveContainer" containerID="eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.151993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-config\") pod \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.152308 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-nb\") pod \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.152497 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-svc\") pod \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.152674 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-sb\") pod \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.152806 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-swift-storage-0\") pod \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.152957 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7bqm\" (UniqueName: \"kubernetes.io/projected/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-kube-api-access-x7bqm\") pod \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\" (UID: \"63c41720-bda4-42fd-a9e0-f07fa4d0ea64\") " Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.210334 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-kube-api-access-x7bqm" (OuterVolumeSpecName: "kube-api-access-x7bqm") pod "63c41720-bda4-42fd-a9e0-f07fa4d0ea64" (UID: "63c41720-bda4-42fd-a9e0-f07fa4d0ea64"). InnerVolumeSpecName "kube-api-access-x7bqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.256165 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7bqm\" (UniqueName: \"kubernetes.io/projected/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-kube-api-access-x7bqm\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.339215 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "63c41720-bda4-42fd-a9e0-f07fa4d0ea64" (UID: "63c41720-bda4-42fd-a9e0-f07fa4d0ea64"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.355487 4675 scope.go:117] "RemoveContainer" containerID="cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3" Nov 21 13:59:02 crc kubenswrapper[4675]: E1121 13:59:02.357593 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3\": container with ID starting with cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3 not found: ID does not exist" containerID="cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.357634 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3"} err="failed to get container status \"cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3\": rpc error: code = NotFound desc = could not find container \"cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3\": container with ID starting with cdb352bd37c83e48f714633f6fa437d61bad80cfc13d8528762f511df58428d3 not found: ID does not exist" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.357657 4675 scope.go:117] "RemoveContainer" containerID="eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53" Nov 21 13:59:02 crc kubenswrapper[4675]: E1121 13:59:02.358617 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53\": container with ID starting with eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53 not found: ID does not exist" containerID="eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.358639 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53"} err="failed to get container status \"eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53\": rpc error: code = NotFound desc = could not find container \"eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53\": container with ID starting with eefed8a498db59cf14dd3a26285014749670b93d3a0d86b0eb9778dff3c7ab53 not found: ID does not exist" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.360141 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.413989 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-config" (OuterVolumeSpecName: "config") pod "63c41720-bda4-42fd-a9e0-f07fa4d0ea64" (UID: "63c41720-bda4-42fd-a9e0-f07fa4d0ea64"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.457728 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63c41720-bda4-42fd-a9e0-f07fa4d0ea64" (UID: "63c41720-bda4-42fd-a9e0-f07fa4d0ea64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.462499 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.462535 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.495386 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63c41720-bda4-42fd-a9e0-f07fa4d0ea64" (UID: "63c41720-bda4-42fd-a9e0-f07fa4d0ea64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.495668 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63c41720-bda4-42fd-a9e0-f07fa4d0ea64" (UID: "63c41720-bda4-42fd-a9e0-f07fa4d0ea64"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.568520 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.568573 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c41720-bda4-42fd-a9e0-f07fa4d0ea64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.729421 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-xh6kv"] Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.740527 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-xh6kv"] Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.773136 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.795703 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64487ff74-sfh5j" Nov 21 13:59:02 crc kubenswrapper[4675]: I1121 13:59:02.874766 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" path="/var/lib/kubelet/pods/63c41720-bda4-42fd-a9e0-f07fa4d0ea64/volumes" Nov 21 13:59:03 crc kubenswrapper[4675]: I1121 13:59:03.083403 4675 generic.go:334] "Generic (PLEG): container finished" podID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerID="6be953ebc702dbb085f2794e1ff331a85dc7e242bcd41aae2ced637e99f4cba6" exitCode=0 Nov 21 13:59:03 crc kubenswrapper[4675]: I1121 13:59:03.084463 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58fb49d57b-hpfw5" event={"ID":"5e9e808c-bda6-4e6d-bd55-d94d426574c3","Type":"ContainerDied","Data":"6be953ebc702dbb085f2794e1ff331a85dc7e242bcd41aae2ced637e99f4cba6"} Nov 21 13:59:03 crc kubenswrapper[4675]: I1121 13:59:03.401488 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:59:03 crc kubenswrapper[4675]: I1121 13:59:03.469949 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:59:03 crc kubenswrapper[4675]: I1121 13:59:03.649177 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6kxg7"] Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.141353 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerID="7a31cdb7d39fb102142177fd5eceae1869e3a38a86fd83285358ea2ec3ec5a4d" exitCode=0 Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.141641 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerID="ee9f3c2218e317e0924e0f7bf88a9b86bf0be4120eb523fc9bf0eae71a1fa8a9" exitCode=0 Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.142761 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a639838-cd16-4c4b-9a5c-a76396f54d9e","Type":"ContainerDied","Data":"7a31cdb7d39fb102142177fd5eceae1869e3a38a86fd83285358ea2ec3ec5a4d"} Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.142797 4675 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a639838-cd16-4c4b-9a5c-a76396f54d9e","Type":"ContainerDied","Data":"ee9f3c2218e317e0924e0f7bf88a9b86bf0be4120eb523fc9bf0eae71a1fa8a9"} Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.386927 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.543777 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data-custom\") pod \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.543834 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6vbm\" (UniqueName: \"kubernetes.io/projected/6a639838-cd16-4c4b-9a5c-a76396f54d9e-kube-api-access-p6vbm\") pod \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.543988 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-combined-ca-bundle\") pod \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.544061 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data\") pod \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.544099 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a639838-cd16-4c4b-9a5c-a76396f54d9e-etc-machine-id\") pod \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.544135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-scripts\") pod \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\" (UID: \"6a639838-cd16-4c4b-9a5c-a76396f54d9e\") " Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.545768 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a639838-cd16-4c4b-9a5c-a76396f54d9e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a639838-cd16-4c4b-9a5c-a76396f54d9e" (UID: "6a639838-cd16-4c4b-9a5c-a76396f54d9e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.552839 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6a639838-cd16-4c4b-9a5c-a76396f54d9e" (UID: "6a639838-cd16-4c4b-9a5c-a76396f54d9e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.560769 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-scripts" (OuterVolumeSpecName: "scripts") pod "6a639838-cd16-4c4b-9a5c-a76396f54d9e" (UID: "6a639838-cd16-4c4b-9a5c-a76396f54d9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.565460 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a639838-cd16-4c4b-9a5c-a76396f54d9e-kube-api-access-p6vbm" (OuterVolumeSpecName: "kube-api-access-p6vbm") pod "6a639838-cd16-4c4b-9a5c-a76396f54d9e" (UID: "6a639838-cd16-4c4b-9a5c-a76396f54d9e"). InnerVolumeSpecName "kube-api-access-p6vbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.649685 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a639838-cd16-4c4b-9a5c-a76396f54d9e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.650076 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.650093 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.650103 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6vbm\" (UniqueName: \"kubernetes.io/projected/6a639838-cd16-4c4b-9a5c-a76396f54d9e-kube-api-access-p6vbm\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.680545 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a639838-cd16-4c4b-9a5c-a76396f54d9e" (UID: "6a639838-cd16-4c4b-9a5c-a76396f54d9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.741134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data" (OuterVolumeSpecName: "config-data") pod "6a639838-cd16-4c4b-9a5c-a76396f54d9e" (UID: "6a639838-cd16-4c4b-9a5c-a76396f54d9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.752555 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:04 crc kubenswrapper[4675]: I1121 13:59:04.752588 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a639838-cd16-4c4b-9a5c-a76396f54d9e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.157060 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a639838-cd16-4c4b-9a5c-a76396f54d9e","Type":"ContainerDied","Data":"9731978e9f7f0fa58b9202557fda03f168db2c0882f5903ee3f692dbe9b40486"} Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.157121 4675 scope.go:117] "RemoveContainer" containerID="7a31cdb7d39fb102142177fd5eceae1869e3a38a86fd83285358ea2ec3ec5a4d" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.157239 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.164774 4675 generic.go:334] "Generic (PLEG): container finished" podID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerID="0afd3aed69fe00b4a8898f6932f4eac85e152ed13548de4023674c9b51fce34d" exitCode=0 Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.164980 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6kxg7" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" containerID="cri-o://09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7" gracePeriod=2 Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.165402 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58fb49d57b-hpfw5" event={"ID":"5e9e808c-bda6-4e6d-bd55-d94d426574c3","Type":"ContainerDied","Data":"0afd3aed69fe00b4a8898f6932f4eac85e152ed13548de4023674c9b51fce34d"} Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.235244 4675 scope.go:117] "RemoveContainer" containerID="ee9f3c2218e317e0924e0f7bf88a9b86bf0be4120eb523fc9bf0eae71a1fa8a9" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.291820 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.319168 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.366034 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:59:05 crc kubenswrapper[4675]: E1121 13:59:05.367911 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="probe" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.368042 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="probe" Nov 21 13:59:05 crc kubenswrapper[4675]: E1121 13:59:05.368159 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerName="dnsmasq-dns" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.368245 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerName="dnsmasq-dns" Nov 21 13:59:05 crc kubenswrapper[4675]: E1121 13:59:05.368336 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="cinder-scheduler" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.368415 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="cinder-scheduler" Nov 21 13:59:05 crc kubenswrapper[4675]: E1121 13:59:05.368563 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerName="init" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.368643 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerName="init" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.369362 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="probe" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.369534 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" containerName="cinder-scheduler" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.369654 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c41720-bda4-42fd-a9e0-f07fa4d0ea64" containerName="dnsmasq-dns" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.372753 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.380595 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.449106 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.491453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866np\" (UniqueName: \"kubernetes.io/projected/2c8951ed-3fad-45f7-ab94-b1843d1c4114-kube-api-access-866np\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.491660 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.491843 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.491958 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.492854 
4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.493018 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c8951ed-3fad-45f7-ab94-b1843d1c4114-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.596289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.596946 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.597054 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.597108 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c8951ed-3fad-45f7-ab94-b1843d1c4114-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.597202 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866np\" (UniqueName: \"kubernetes.io/projected/2c8951ed-3fad-45f7-ab94-b1843d1c4114-kube-api-access-866np\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.597252 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.598221 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c8951ed-3fad-45f7-ab94-b1843d1c4114-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.605373 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-scripts\") pod \"cinder-scheduler-0\" 
(UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.605392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.605986 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.608028 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8951ed-3fad-45f7-ab94-b1843d1c4114-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.626736 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866np\" (UniqueName: \"kubernetes.io/projected/2c8951ed-3fad-45f7-ab94-b1843d1c4114-kube-api-access-866np\") pod \"cinder-scheduler-0\" (UID: \"2c8951ed-3fad-45f7-ab94-b1843d1c4114\") " pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.714483 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.735570 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.801159 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-httpd-config\") pod \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.801304 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-combined-ca-bundle\") pod \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.801446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-ovndb-tls-certs\") pod \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.801506 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j54zc\" (UniqueName: \"kubernetes.io/projected/5e9e808c-bda6-4e6d-bd55-d94d426574c3-kube-api-access-j54zc\") pod \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.801572 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-config\") pod \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\" (UID: \"5e9e808c-bda6-4e6d-bd55-d94d426574c3\") " Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.808098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5e9e808c-bda6-4e6d-bd55-d94d426574c3" (UID: "5e9e808c-bda6-4e6d-bd55-d94d426574c3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.820026 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9e808c-bda6-4e6d-bd55-d94d426574c3-kube-api-access-j54zc" (OuterVolumeSpecName: "kube-api-access-j54zc") pod "5e9e808c-bda6-4e6d-bd55-d94d426574c3" (UID: "5e9e808c-bda6-4e6d-bd55-d94d426574c3"). InnerVolumeSpecName "kube-api-access-j54zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.902606 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.907009 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j54zc\" (UniqueName: \"kubernetes.io/projected/5e9e808c-bda6-4e6d-bd55-d94d426574c3-kube-api-access-j54zc\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.907044 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.916270 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-config" (OuterVolumeSpecName: "config") pod "5e9e808c-bda6-4e6d-bd55-d94d426574c3" (UID: "5e9e808c-bda6-4e6d-bd55-d94d426574c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4675]: I1121 13:59:05.920355 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e9e808c-bda6-4e6d-bd55-d94d426574c3" (UID: "5e9e808c-bda6-4e6d-bd55-d94d426574c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.008201 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d488b\" (UniqueName: \"kubernetes.io/projected/78391e3d-3bab-469a-a163-3729fdf23773-kube-api-access-d488b\") pod \"78391e3d-3bab-469a-a163-3729fdf23773\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.008403 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-catalog-content\") pod \"78391e3d-3bab-469a-a163-3729fdf23773\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.008444 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-utilities\") pod \"78391e3d-3bab-469a-a163-3729fdf23773\" (UID: \"78391e3d-3bab-469a-a163-3729fdf23773\") " Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.009295 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.009316 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.010976 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-utilities" (OuterVolumeSpecName: "utilities") pod "78391e3d-3bab-469a-a163-3729fdf23773" (UID: "78391e3d-3bab-469a-a163-3729fdf23773"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.030866 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5e9e808c-bda6-4e6d-bd55-d94d426574c3" (UID: "5e9e808c-bda6-4e6d-bd55-d94d426574c3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.032202 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78391e3d-3bab-469a-a163-3729fdf23773-kube-api-access-d488b" (OuterVolumeSpecName: "kube-api-access-d488b") pod "78391e3d-3bab-469a-a163-3729fdf23773" (UID: "78391e3d-3bab-469a-a163-3729fdf23773"). InnerVolumeSpecName "kube-api-access-d488b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.066313 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78391e3d-3bab-469a-a163-3729fdf23773" (UID: "78391e3d-3bab-469a-a163-3729fdf23773"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.113571 4675 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e9e808c-bda6-4e6d-bd55-d94d426574c3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.113593 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d488b\" (UniqueName: \"kubernetes.io/projected/78391e3d-3bab-469a-a163-3729fdf23773-kube-api-access-d488b\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.113605 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.113613 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78391e3d-3bab-469a-a163-3729fdf23773-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.151204 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.200:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.185519 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58fb49d57b-hpfw5" event={"ID":"5e9e808c-bda6-4e6d-bd55-d94d426574c3","Type":"ContainerDied","Data":"5a1a475c9a2dc752935b32864a4fd78b5d4479bc04bba7368b30ec1412297118"} Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.185584 4675 scope.go:117] "RemoveContainer" containerID="6be953ebc702dbb085f2794e1ff331a85dc7e242bcd41aae2ced637e99f4cba6" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.185751 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58fb49d57b-hpfw5" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.202652 4675 generic.go:334] "Generic (PLEG): container finished" podID="78391e3d-3bab-469a-a163-3729fdf23773" containerID="09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7" exitCode=0 Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.202859 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerDied","Data":"09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7"} Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.202885 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kxg7" event={"ID":"78391e3d-3bab-469a-a163-3729fdf23773","Type":"ContainerDied","Data":"5674ed90fcb575f98d0e0c5c11484815867539c582fdb5f7d4c6a45728d31a35"} Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.202984 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6kxg7" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.221908 4675 scope.go:117] "RemoveContainer" containerID="0afd3aed69fe00b4a8898f6932f4eac85e152ed13548de4023674c9b51fce34d" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.229534 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58fb49d57b-hpfw5"] Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.241590 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58fb49d57b-hpfw5"] Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.255467 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6kxg7"] Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.267183 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6kxg7"] Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.271149 4675 scope.go:117] "RemoveContainer" containerID="09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.318216 4675 scope.go:117] "RemoveContainer" containerID="e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.366187 4675 scope.go:117] "RemoveContainer" containerID="cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.373656 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.396705 4675 scope.go:117] "RemoveContainer" containerID="09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7" Nov 21 13:59:06 crc kubenswrapper[4675]: E1121 13:59:06.397123 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7\": container with ID starting with 09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7 not found: ID does not exist" containerID="09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.397164 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7"} err="failed to get container status \"09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7\": rpc error: code = NotFound desc = could not find container \"09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7\": container with ID starting with 09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7 not found: ID does not exist" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.397188 4675 scope.go:117] "RemoveContainer" containerID="e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444" Nov 21 13:59:06 crc kubenswrapper[4675]: E1121 13:59:06.397636 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444\": container with ID starting with e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444 not found: ID does not exist" containerID="e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.397663 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444"} err="failed to get container status \"e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444\": rpc error: code = NotFound desc = could not find container \"e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444\": container with ID starting with e0ec2d387fc60686fbe58f721a538ed520400da9c4ca44300623a5501e9f2444 not found: ID does not exist" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.397682 4675 scope.go:117] "RemoveContainer" containerID="cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2" Nov 21 13:59:06 crc kubenswrapper[4675]: E1121 13:59:06.398661 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2\": container with ID starting with cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2 not found: ID does not exist" containerID="cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.398684 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2"} err="failed to get container status \"cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2\": rpc error: code = NotFound desc = could not find container \"cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2\": container with ID starting with cea73c724bad74edea502d5246460078b3bb6d6fd8b1bc6b2c8db476721ac7b2 not found: ID does not exist" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.558739 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7ff9b4b9fd-shbm9" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.864736 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" path="/var/lib/kubelet/pods/5e9e808c-bda6-4e6d-bd55-d94d426574c3/volumes" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.865512 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a639838-cd16-4c4b-9a5c-a76396f54d9e" 
path="/var/lib/kubelet/pods/6a639838-cd16-4c4b-9a5c-a76396f54d9e/volumes" Nov 21 13:59:06 crc kubenswrapper[4675]: I1121 13:59:06.866162 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78391e3d-3bab-469a-a163-3729fdf23773" path="/var/lib/kubelet/pods/78391e3d-3bab-469a-a163-3729fdf23773/volumes" Nov 21 13:59:07 crc kubenswrapper[4675]: I1121 13:59:07.239888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c8951ed-3fad-45f7-ab94-b1843d1c4114","Type":"ContainerStarted","Data":"e3950b21c337288568330303393b0f3546bf49a0f3bdc5b1d0ed826717c42ad4"} Nov 21 13:59:07 crc kubenswrapper[4675]: I1121 13:59:07.239947 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c8951ed-3fad-45f7-ab94-b1843d1c4114","Type":"ContainerStarted","Data":"3ec3a28c0f091beb85517bd0ec4fae2190ae724dca6e0035f3b9eaeb030e13a2"} Nov 21 13:59:07 crc kubenswrapper[4675]: I1121 13:59:07.708672 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.009521 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85bfdcc858-xc95t" Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.103730 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74b6656858-qfxkd"] Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.103945 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74b6656858-qfxkd" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api-log" containerID="cri-o://d4aa957b443d6796e49f622dbad17c869ebe5a4fd8c2ae3002391c5b8f4f2819" gracePeriod=30 Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.105416 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74b6656858-qfxkd" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api" containerID="cri-o://980c00486e8cbacbbdc77c27cb5b0acdcaa81e78a9f76deb4c806ec40a114280" gracePeriod=30 Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.318391 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c8951ed-3fad-45f7-ab94-b1843d1c4114","Type":"ContainerStarted","Data":"12b15df9bc52109bf7101f4fe50dc2182dc97078716243f99c3f959153c1c6b3"} Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.361805 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.361782315 podStartE2EDuration="3.361782315s" podCreationTimestamp="2025-11-21 13:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:08.349777945 +0000 UTC m=+1625.076192672" watchObservedRunningTime="2025-11-21 13:59:08.361782315 +0000 UTC m=+1625.088197042" Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.364902 4675 generic.go:334] "Generic (PLEG): container finished" podID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerID="d4aa957b443d6796e49f622dbad17c869ebe5a4fd8c2ae3002391c5b8f4f2819" exitCode=143 Nov 21 13:59:08 crc kubenswrapper[4675]: I1121 13:59:08.365188 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b6656858-qfxkd" 
event={"ID":"cefea82b-2103-4d35-a134-a1f96a2abc5f","Type":"ContainerDied","Data":"d4aa957b443d6796e49f622dbad17c869ebe5a4fd8c2ae3002391c5b8f4f2819"} Nov 21 13:59:09 crc kubenswrapper[4675]: I1121 13:59:09.119750 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.737150 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.790909 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 13:59:10 crc kubenswrapper[4675]: E1121 13:59:10.791488 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-httpd" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.791510 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-httpd" Nov 21 13:59:10 crc kubenswrapper[4675]: E1121 13:59:10.791525 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-api" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.791533 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-api" Nov 21 13:59:10 crc kubenswrapper[4675]: E1121 13:59:10.791553 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="extract-content" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.791561 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="extract-content" Nov 21 13:59:10 crc kubenswrapper[4675]: E1121 13:59:10.791579 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="extract-utilities" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.791586 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="extract-utilities" Nov 21 13:59:10 crc kubenswrapper[4675]: E1121 13:59:10.791643 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.791652 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.791957 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-api" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.792000 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9e808c-bda6-4e6d-bd55-d94d426574c3" containerName="neutron-httpd" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.792026 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="78391e3d-3bab-469a-a163-3729fdf23773" containerName="registry-server" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.793005 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.794963 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.795103 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hdk7z" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.824090 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.825016 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.848458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6ec2a5-ea89-459f-b66c-4822e68f1498-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.848767 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b6ec2a5-ea89-459f-b66c-4822e68f1498-openstack-config\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.850530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b6ec2a5-ea89-459f-b66c-4822e68f1498-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.850567 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8wj\" (UniqueName: \"kubernetes.io/projected/3b6ec2a5-ea89-459f-b66c-4822e68f1498-kube-api-access-4n8wj\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.953242 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6ec2a5-ea89-459f-b66c-4822e68f1498-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.953462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b6ec2a5-ea89-459f-b66c-4822e68f1498-openstack-config\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.953511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b6ec2a5-ea89-459f-b66c-4822e68f1498-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.953533 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4n8wj\" (UniqueName: \"kubernetes.io/projected/3b6ec2a5-ea89-459f-b66c-4822e68f1498-kube-api-access-4n8wj\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.956090 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b6ec2a5-ea89-459f-b66c-4822e68f1498-openstack-config\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.960708 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b6ec2a5-ea89-459f-b66c-4822e68f1498-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.961666 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b6ec2a5-ea89-459f-b66c-4822e68f1498-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:10 crc kubenswrapper[4675]: I1121 13:59:10.972242 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8wj\" (UniqueName: \"kubernetes.io/projected/3b6ec2a5-ea89-459f-b66c-4822e68f1498-kube-api-access-4n8wj\") pod \"openstackclient\" (UID: \"3b6ec2a5-ea89-459f-b66c-4822e68f1498\") " pod="openstack/openstackclient" Nov 21 13:59:11 crc kubenswrapper[4675]: I1121 13:59:11.131466 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 13:59:11 crc kubenswrapper[4675]: I1121 13:59:11.692648 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74b6656858-qfxkd" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:49596->10.217.0.197:9311: read: connection reset by peer" Nov 21 13:59:11 crc kubenswrapper[4675]: I1121 13:59:11.692681 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-74b6656858-qfxkd" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:49588->10.217.0.197:9311: read: connection reset by peer" Nov 21 13:59:12 crc kubenswrapper[4675]: I1121 13:59:12.018520 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 13:59:12 crc kubenswrapper[4675]: I1121 13:59:12.431451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b6ec2a5-ea89-459f-b66c-4822e68f1498","Type":"ContainerStarted","Data":"ebfde01aa844c484b4aee89475da547ac1f1f446bafbc6ab99a9f4cb0bf799fd"} Nov 21 13:59:12 crc kubenswrapper[4675]: I1121 13:59:12.434654 4675 generic.go:334] "Generic (PLEG): container finished" podID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerID="980c00486e8cbacbbdc77c27cb5b0acdcaa81e78a9f76deb4c806ec40a114280" exitCode=0 Nov 21 13:59:12 crc kubenswrapper[4675]: I1121 13:59:12.434696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b6656858-qfxkd" event={"ID":"cefea82b-2103-4d35-a134-a1f96a2abc5f","Type":"ContainerDied","Data":"980c00486e8cbacbbdc77c27cb5b0acdcaa81e78a9f76deb4c806ec40a114280"} Nov 21 13:59:12 crc kubenswrapper[4675]: I1121 13:59:12.894402 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.004257 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmxw\" (UniqueName: \"kubernetes.io/projected/cefea82b-2103-4d35-a134-a1f96a2abc5f-kube-api-access-nmmxw\") pod \"cefea82b-2103-4d35-a134-a1f96a2abc5f\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.004665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data-custom\") pod \"cefea82b-2103-4d35-a134-a1f96a2abc5f\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.004770 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-combined-ca-bundle\") pod \"cefea82b-2103-4d35-a134-a1f96a2abc5f\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.004918 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data\") pod \"cefea82b-2103-4d35-a134-a1f96a2abc5f\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.005399 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefea82b-2103-4d35-a134-a1f96a2abc5f-logs\") pod \"cefea82b-2103-4d35-a134-a1f96a2abc5f\" (UID: \"cefea82b-2103-4d35-a134-a1f96a2abc5f\") " Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.010796 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cefea82b-2103-4d35-a134-a1f96a2abc5f-kube-api-access-nmmxw" (OuterVolumeSpecName: "kube-api-access-nmmxw") pod "cefea82b-2103-4d35-a134-a1f96a2abc5f" (UID: "cefea82b-2103-4d35-a134-a1f96a2abc5f"). InnerVolumeSpecName "kube-api-access-nmmxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.011830 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefea82b-2103-4d35-a134-a1f96a2abc5f-logs" (OuterVolumeSpecName: "logs") pod "cefea82b-2103-4d35-a134-a1f96a2abc5f" (UID: "cefea82b-2103-4d35-a134-a1f96a2abc5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.016244 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cefea82b-2103-4d35-a134-a1f96a2abc5f" (UID: "cefea82b-2103-4d35-a134-a1f96a2abc5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.041716 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cefea82b-2103-4d35-a134-a1f96a2abc5f" (UID: "cefea82b-2103-4d35-a134-a1f96a2abc5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.070359 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data" (OuterVolumeSpecName: "config-data") pod "cefea82b-2103-4d35-a134-a1f96a2abc5f" (UID: "cefea82b-2103-4d35-a134-a1f96a2abc5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.110682 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.110716 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefea82b-2103-4d35-a134-a1f96a2abc5f-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.110730 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmmxw\" (UniqueName: \"kubernetes.io/projected/cefea82b-2103-4d35-a134-a1f96a2abc5f-kube-api-access-nmmxw\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.110743 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.110754 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefea82b-2103-4d35-a134-a1f96a2abc5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4675]: E1121 13:59:13.334434 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.448907 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b6656858-qfxkd" event={"ID":"cefea82b-2103-4d35-a134-a1f96a2abc5f","Type":"ContainerDied","Data":"27ba54ee79cc0894047ed3b5a4c68cdc7b86a5729ba32cd46e93395caccc4944"} Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.448991 4675 scope.go:117] "RemoveContainer" containerID="980c00486e8cbacbbdc77c27cb5b0acdcaa81e78a9f76deb4c806ec40a114280" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.449014 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b6656858-qfxkd" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.489831 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74b6656858-qfxkd"] Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.502258 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74b6656858-qfxkd"] Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.507453 4675 scope.go:117] "RemoveContainer" containerID="d4aa957b443d6796e49f622dbad17c869ebe5a4fd8c2ae3002391c5b8f4f2819" Nov 21 13:59:13 crc kubenswrapper[4675]: I1121 13:59:13.998935 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-79c7749954-ksq5g"] Nov 21 13:59:14 crc kubenswrapper[4675]: E1121 13:59:14.000271 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api-log" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.000383 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api-log" Nov 21 13:59:14 crc kubenswrapper[4675]: E1121 13:59:14.000476 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.000559 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.000951 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api-log" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.001058 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" containerName="barbican-api" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.002264 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.006602 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.006655 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-wcdqh" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.006777 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.038444 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79c7749954-ksq5g"] Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.051041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7sqs\" (UniqueName: \"kubernetes.io/projected/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-kube-api-access-h7sqs\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.056943 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-combined-ca-bundle\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.057050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data-custom\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.057175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.159377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data-custom\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.159452 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.159599 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7sqs\" (UniqueName: \"kubernetes.io/projected/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-kube-api-access-h7sqs\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " 
pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.159632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-combined-ca-bundle\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.171345 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ptgnp"] Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.174449 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.177845 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data-custom\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.178136 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-combined-ca-bundle\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.190286 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-69bbd8cb64-j4z85"] Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.193258 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.202460 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.203493 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7sqs\" (UniqueName: \"kubernetes.io/projected/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-kube-api-access-h7sqs\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.204264 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data\") pod \"heat-engine-79c7749954-ksq5g\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.209541 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c9cd5548d-qsvvl"] Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.211554 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.218391 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.233728 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69bbd8cb64-j4z85"] Nov 21 13:59:14 crc kubenswrapper[4675]: E1121 13:59:14.253335 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.268666 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-combined-ca-bundle\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.268725 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data-custom\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.268768 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-combined-ca-bundle\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.268838 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwc65\" (UniqueName: \"kubernetes.io/projected/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-kube-api-access-gwc65\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.268878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.268923 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc 
kubenswrapper[4675]: I1121 13:59:14.268990 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j42d\" (UniqueName: \"kubernetes.io/projected/86196f7d-6aff-4774-9ceb-7d5581f8d38a-kube-api-access-8j42d\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf5pn\" (UniqueName: \"kubernetes.io/projected/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-kube-api-access-qf5pn\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269123 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data-custom\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-config\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269211 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269237 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.269288 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.279882 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ptgnp"] Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.311143 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-7c9cd5548d-qsvvl"] Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.343631 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.372701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.372983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-combined-ca-bundle\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.383689 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data-custom\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.383870 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-combined-ca-bundle\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwc65\" (UniqueName: \"kubernetes.io/projected/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-kube-api-access-gwc65\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384221 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384561 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j42d\" (UniqueName: \"kubernetes.io/projected/86196f7d-6aff-4774-9ceb-7d5581f8d38a-kube-api-access-8j42d\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384746 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf5pn\" (UniqueName: 
\"kubernetes.io/projected/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-kube-api-access-qf5pn\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.384980 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data-custom\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.385100 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-config\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.385288 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.385396 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.386551 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.387775 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.400560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-config\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.401260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-sb\") pod 
\"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.401671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.402044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-combined-ca-bundle\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.428260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.429787 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data-custom\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.443796 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-combined-ca-bundle\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.449177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.453909 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data-custom\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.456310 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j42d\" (UniqueName: \"kubernetes.io/projected/86196f7d-6aff-4774-9ceb-7d5581f8d38a-kube-api-access-8j42d\") pod \"dnsmasq-dns-688b9f5b49-ptgnp\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.473055 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwc65\" (UniqueName: \"kubernetes.io/projected/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-kube-api-access-gwc65\") pod \"heat-cfnapi-69bbd8cb64-j4z85\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") " pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" 
Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.483026 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf5pn\" (UniqueName: \"kubernetes.io/projected/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-kube-api-access-qf5pn\") pod \"heat-api-7c9cd5548d-qsvvl\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") " pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.643336 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.691956 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.692693 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:14 crc kubenswrapper[4675]: I1121 13:59:14.907352 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cefea82b-2103-4d35-a134-a1f96a2abc5f" path="/var/lib/kubelet/pods/cefea82b-2103-4d35-a134-a1f96a2abc5f/volumes" Nov 21 13:59:15 crc kubenswrapper[4675]: I1121 13:59:15.400170 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79c7749954-ksq5g"] Nov 21 13:59:15 crc kubenswrapper[4675]: I1121 13:59:15.548686 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c9cd5548d-qsvvl"] Nov 21 13:59:15 crc kubenswrapper[4675]: I1121 13:59:15.608163 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ptgnp"] Nov 21 13:59:15 crc kubenswrapper[4675]: I1121 13:59:15.611383 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79c7749954-ksq5g" event={"ID":"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2","Type":"ContainerStarted","Data":"d19c47947a733e918420c099d8765ac71233f205c6bccf1f3200f9a1ee82c396"} Nov 21 13:59:15 crc kubenswrapper[4675]: I1121 13:59:15.670813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69bbd8cb64-j4z85"] Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.031925 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.135849 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.135899 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.135938 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.136789 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.136864 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" gracePeriod=600 Nov 21 13:59:16 crc kubenswrapper[4675]: E1121 13:59:16.310170 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.639714 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" event={"ID":"6f3372bb-4733-4de4-b579-e9ede0ce2ed4","Type":"ContainerStarted","Data":"a1d5266e15e3292a64f75215fc58645808065cec909ed9c28ed3b510d8b8cb38"} Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.654896 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79c7749954-ksq5g" event={"ID":"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2","Type":"ContainerStarted","Data":"9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466"} Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.654945 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.670141 4675 generic.go:334] "Generic (PLEG): container finished" podID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerID="e05441236adf386dd8932b252fa0d03c1f18e13ca3dd284dff0b87be7127e03b" exitCode=0 Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.670249 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" event={"ID":"86196f7d-6aff-4774-9ceb-7d5581f8d38a","Type":"ContainerDied","Data":"e05441236adf386dd8932b252fa0d03c1f18e13ca3dd284dff0b87be7127e03b"} Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.670277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" event={"ID":"86196f7d-6aff-4774-9ceb-7d5581f8d38a","Type":"ContainerStarted","Data":"a688f913c9deafd2d490538ca83b668cb68f369183d03aa9910252b732e1dde1"} Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.674710 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-79c7749954-ksq5g" podStartSLOduration=3.674690465 podStartE2EDuration="3.674690465s" podCreationTimestamp="2025-11-21 13:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:16.670630334 +0000 UTC m=+1633.397045061" watchObservedRunningTime="2025-11-21 13:59:16.674690465 +0000 UTC m=+1633.401105192" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.676586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-7c9cd5548d-qsvvl" event={"ID":"55a4bbbf-724c-4ec4-95ce-0bc8395012f7","Type":"ContainerStarted","Data":"9d8e3e4c4ac4a51cfb5af357397bb80d3d34cbd039f2efb10d098f41343ef852"} Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.684013 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" exitCode=0 Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.684057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4"} Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.684105 4675 scope.go:117] "RemoveContainer" containerID="4f2f7ddee4baba66416eb7233c361ee3ddc2444a945155131226bb7f36fc9024" Nov 21 13:59:16 crc kubenswrapper[4675]: I1121 13:59:16.685914 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 13:59:16 crc kubenswrapper[4675]: E1121 13:59:16.686285 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 13:59:17 crc kubenswrapper[4675]: I1121 13:59:17.592530 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 13:59:17 crc kubenswrapper[4675]: I1121 13:59:17.700418 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" event={"ID":"86196f7d-6aff-4774-9ceb-7d5581f8d38a","Type":"ContainerStarted","Data":"8b4a9f292ced3e486592fa94663b6791dfa5aefbd09a64098395e90f45ef4c65"} Nov 21 13:59:17 crc kubenswrapper[4675]: I1121 13:59:17.701190 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:17 crc kubenswrapper[4675]: I1121 13:59:17.736646 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" podStartSLOduration=3.7366286300000002 podStartE2EDuration="3.73662863s" podCreationTimestamp="2025-11-21 13:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:17.729509312 +0000 UTC m=+1634.455924039" watchObservedRunningTime="2025-11-21 13:59:17.73662863 +0000 UTC m=+1634.463043357" Nov 21 13:59:19 crc kubenswrapper[4675]: I1121 13:59:19.514500 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:19 crc kubenswrapper[4675]: I1121 13:59:19.515108 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-central-agent" containerID="cri-o://c098e4bd6707d7f500240a9676157dd14fcc74c844b5aa3d7316c14abb374ca6" gracePeriod=30 Nov 21 13:59:19 crc kubenswrapper[4675]: I1121 13:59:19.515524 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="proxy-httpd" containerID="cri-o://7b399597907711517d0542971cfa5b8956e074a0e36399f16c96ad892ef696a8" gracePeriod=30 Nov 21 13:59:19 crc kubenswrapper[4675]: I1121 13:59:19.515593 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-notification-agent" containerID="cri-o://a2eab34636acae345c92e2d8d0fd78a5a04945f4676ff2ad7f9782407e9d14cb" gracePeriod=30 Nov 21 13:59:19 crc kubenswrapper[4675]: I1121 13:59:19.515681 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="sg-core" containerID="cri-o://a16920f6623c44c18f72947a03388fb206f87260eb80bbaaf9dc97629a146bd4" gracePeriod=30 Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.753486 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" event={"ID":"6f3372bb-4733-4de4-b579-e9ede0ce2ed4","Type":"ContainerStarted","Data":"3ef1ba932f4e8b915160b710f98a68a1851ea8eefca5698703a54140dd2190c8"} Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.754478 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.757442 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c9cd5548d-qsvvl" event={"ID":"55a4bbbf-724c-4ec4-95ce-0bc8395012f7","Type":"ContainerStarted","Data":"a37f1a1a8902fdd6c0479b5a4f9f07d860b6c277a5c08afbe2a79a259a61de49"} Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.758093 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c9cd5548d-qsvvl" Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.762516 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a9c792a-f9a3-416b-b131-ac61338200da" containerID="7b399597907711517d0542971cfa5b8956e074a0e36399f16c96ad892ef696a8" exitCode=0 Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.762544 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a9c792a-f9a3-416b-b131-ac61338200da" containerID="a16920f6623c44c18f72947a03388fb206f87260eb80bbaaf9dc97629a146bd4" exitCode=2 Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.762552 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a9c792a-f9a3-416b-b131-ac61338200da" containerID="c098e4bd6707d7f500240a9676157dd14fcc74c844b5aa3d7316c14abb374ca6" exitCode=0 Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.762578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerDied","Data":"7b399597907711517d0542971cfa5b8956e074a0e36399f16c96ad892ef696a8"} Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.762598 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerDied","Data":"a16920f6623c44c18f72947a03388fb206f87260eb80bbaaf9dc97629a146bd4"} Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.762610 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerDied","Data":"c098e4bd6707d7f500240a9676157dd14fcc74c844b5aa3d7316c14abb374ca6"} Nov 21 13:59:20 crc kubenswrapper[4675]: 
I1121 13:59:20.791508 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" podStartSLOduration=2.609273872 podStartE2EDuration="6.79148302s" podCreationTimestamp="2025-11-21 13:59:14 +0000 UTC" firstStartedPulling="2025-11-21 13:59:15.702104854 +0000 UTC m=+1632.428519591" lastFinishedPulling="2025-11-21 13:59:19.884314012 +0000 UTC m=+1636.610728739" observedRunningTime="2025-11-21 13:59:20.76947737 +0000 UTC m=+1637.495892097" watchObservedRunningTime="2025-11-21 13:59:20.79148302 +0000 UTC m=+1637.517897747" Nov 21 13:59:20 crc kubenswrapper[4675]: I1121 13:59:20.813157 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c9cd5548d-qsvvl" podStartSLOduration=2.516661788 podStartE2EDuration="6.813136691s" podCreationTimestamp="2025-11-21 13:59:14 +0000 UTC" firstStartedPulling="2025-11-21 13:59:15.582382103 +0000 UTC m=+1632.308796820" lastFinishedPulling="2025-11-21 13:59:19.878856996 +0000 UTC m=+1636.605271723" observedRunningTime="2025-11-21 13:59:20.796489445 +0000 UTC m=+1637.522904172" watchObservedRunningTime="2025-11-21 13:59:20.813136691 +0000 UTC m=+1637.539551418" Nov 21 13:59:21 crc kubenswrapper[4675]: I1121 13:59:21.777498 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a9c792a-f9a3-416b-b131-ac61338200da" containerID="a2eab34636acae345c92e2d8d0fd78a5a04945f4676ff2ad7f9782407e9d14cb" exitCode=0 Nov 21 13:59:21 crc kubenswrapper[4675]: I1121 13:59:21.777545 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerDied","Data":"a2eab34636acae345c92e2d8d0fd78a5a04945f4676ff2ad7f9782407e9d14cb"} Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.565311 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5cb8d89bd7-jsjdv"] Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.567396 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.570160 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.570443 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.570564 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.593310 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cb8d89bd7-jsjdv"] Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.636921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-internal-tls-certs\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.636997 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-public-tls-certs\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.637054 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-etc-swift\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.637107 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-config-data\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.637258 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbl85\" (UniqueName: \"kubernetes.io/projected/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-kube-api-access-tbl85\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.637287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-log-httpd\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.637323 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-combined-ca-bundle\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " 
pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.637363 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-run-httpd\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.739345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-etc-swift\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.739647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-config-data\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.739984 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbl85\" (UniqueName: \"kubernetes.io/projected/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-kube-api-access-tbl85\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.740120 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-log-httpd\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.740627 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-combined-ca-bundle\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.740800 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-run-httpd\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.741014 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-internal-tls-certs\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.741192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-public-tls-certs\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 
13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.741299 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-run-httpd\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.740814 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-log-httpd\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.751003 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-internal-tls-certs\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.752826 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-public-tls-certs\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.753156 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-config-data\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.753223 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-etc-swift\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.753518 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-combined-ca-bundle\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.774913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbl85\" (UniqueName: \"kubernetes.io/projected/35b58484-6cb2-4edc-bea9-4d3a8d6b1479-kube-api-access-tbl85\") pod \"swift-proxy-5cb8d89bd7-jsjdv\" (UID: \"35b58484-6cb2-4edc-bea9-4d3a8d6b1479\") " pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.908586 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.910980 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7498687b57-vr4xt"] Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.920229 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.943665 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-864b497b7c-2zlp2"] Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.952358 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.970550 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7498687b57-vr4xt"] Nov 21 13:59:22 crc kubenswrapper[4675]: I1121 13:59:22.997442 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-864b497b7c-2zlp2"] Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.010438 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5856d7f7bf-rsjh7"] Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.012175 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.033116 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5856d7f7bf-rsjh7"] Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086640 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086702 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wpl\" (UniqueName: \"kubernetes.io/projected/979b6714-8706-4ab6-bd6e-dd127eed8347-kube-api-access-24wpl\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086733 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data-custom\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086781 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data-custom\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr6cs\" (UniqueName: 
\"kubernetes.io/projected/6e39ea81-bdee-475a-87ea-5fbd7c02759f-kube-api-access-cr6cs\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data-custom\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.086998 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-combined-ca-bundle\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.087022 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-combined-ca-bundle\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.087105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plbb\" (UniqueName: \"kubernetes.io/projected/0fbe5346-a173-4dc8-97f3-800ada75bf1b-kube-api-access-2plbb\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.087135 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.087186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-combined-ca-bundle\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189283 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wpl\" (UniqueName: \"kubernetes.io/projected/979b6714-8706-4ab6-bd6e-dd127eed8347-kube-api-access-24wpl\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189313 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data-custom\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data-custom\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr6cs\" (UniqueName: \"kubernetes.io/projected/6e39ea81-bdee-475a-87ea-5fbd7c02759f-kube-api-access-cr6cs\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189459 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data-custom\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-combined-ca-bundle\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189521 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-combined-ca-bundle\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plbb\" (UniqueName: \"kubernetes.io/projected/0fbe5346-a173-4dc8-97f3-800ada75bf1b-kube-api-access-2plbb\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.189586 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-combined-ca-bundle\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.194948 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-combined-ca-bundle\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.199940 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data-custom\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.199954 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-combined-ca-bundle\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.200735 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.201533 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.202734 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data-custom\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.203615 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data-custom\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.212239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-combined-ca-bundle\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.212570 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data\") pod 
\"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.215942 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wpl\" (UniqueName: \"kubernetes.io/projected/979b6714-8706-4ab6-bd6e-dd127eed8347-kube-api-access-24wpl\") pod \"heat-api-864b497b7c-2zlp2\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.216433 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr6cs\" (UniqueName: \"kubernetes.io/projected/6e39ea81-bdee-475a-87ea-5fbd7c02759f-kube-api-access-cr6cs\") pod \"heat-cfnapi-5856d7f7bf-rsjh7\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.225805 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plbb\" (UniqueName: \"kubernetes.io/projected/0fbe5346-a173-4dc8-97f3-800ada75bf1b-kube-api-access-2plbb\") pod \"heat-engine-7498687b57-vr4xt\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.273604 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.321938 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:23 crc kubenswrapper[4675]: I1121 13:59:23.346381 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.612165 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.612713 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-log" containerID="cri-o://8472e0b81aaeca11e557f8ee1319039a7c7651ab01f69f50f81e85bfac90e43f" gracePeriod=30 Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.612951 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-httpd" containerID="cri-o://b08a2793e7f8a34083efd3edfe08bcd77a57c0b062fe7d0c64bb2c77cbe6829c" gracePeriod=30 Nov 21 13:59:24 crc kubenswrapper[4675]: E1121 13:59:24.664037 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.694927 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.776752 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8z4kx"] Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.777009 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="dnsmasq-dns" containerID="cri-o://0ee36b2cd00b1a6b03a0d7a2f85eb018b58cf861ba33c265ed0fe3cf79cf1552" gracePeriod=10 Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.888040 4675 generic.go:334] "Generic (PLEG): container finished" podID="597a2793-4d69-4558-906f-ee005618985e" containerID="8472e0b81aaeca11e557f8ee1319039a7c7651ab01f69f50f81e85bfac90e43f" exitCode=143 Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.957841 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c9cd5548d-qsvvl"] Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.957908 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"597a2793-4d69-4558-906f-ee005618985e","Type":"ContainerDied","Data":"8472e0b81aaeca11e557f8ee1319039a7c7651ab01f69f50f81e85bfac90e43f"} Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.963146 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7c9cd5548d-qsvvl" podUID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" containerName="heat-api" containerID="cri-o://a37f1a1a8902fdd6c0479b5a4f9f07d860b6c277a5c08afbe2a79a259a61de49" gracePeriod=60 Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.971447 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-69bbd8cb64-j4z85"] Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.973157 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" podUID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" containerName="heat-cfnapi" containerID="cri-o://3ef1ba932f4e8b915160b710f98a68a1851ea8eefca5698703a54140dd2190c8" gracePeriod=60 Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.987698 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-84586cb599-nkzm2"] Nov 21 13:59:24 crc kubenswrapper[4675]: I1121 13:59:24.994528 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.000843 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.010231 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.051233 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-f94f9658f-ptznm"] Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.052894 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.063533 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.064281 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.074333 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84586cb599-nkzm2"] Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.116157 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-f94f9658f-ptznm"] Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158583 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data-custom\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158685 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158720 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data-custom\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158749 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-combined-ca-bundle\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158773 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmks\" (UniqueName: \"kubernetes.io/projected/f1015b8a-a8a3-4941-8959-2d4fd5aee749-kube-api-access-swmks\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158810 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158872 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4c4\" (UniqueName: \"kubernetes.io/projected/3d968ae6-da72-485c-ac6d-393bbc1363da-kube-api-access-pl4c4\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.158940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-internal-tls-certs\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.159001 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-internal-tls-certs\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.159078 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-public-tls-certs\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.159102 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-public-tls-certs\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.159153 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-combined-ca-bundle\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264510 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4c4\" (UniqueName: \"kubernetes.io/projected/3d968ae6-da72-485c-ac6d-393bbc1363da-kube-api-access-pl4c4\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264573 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-internal-tls-certs\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264620 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-internal-tls-certs\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264664 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-public-tls-certs\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264679 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-public-tls-certs\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-combined-ca-bundle\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264758 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data-custom\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264865 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data-custom\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264894 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-combined-ca-bundle\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264915 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swmks\" (UniqueName: \"kubernetes.io/projected/f1015b8a-a8a3-4941-8959-2d4fd5aee749-kube-api-access-swmks\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.264945 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.277758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.279946 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-combined-ca-bundle\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.282604 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-internal-tls-certs\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.283096 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-internal-tls-certs\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.283755 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-combined-ca-bundle\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.284611 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-public-tls-certs\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.296402 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data-custom\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.300873 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swmks\" (UniqueName: \"kubernetes.io/projected/f1015b8a-a8a3-4941-8959-2d4fd5aee749-kube-api-access-swmks\") pod \"heat-cfnapi-f94f9658f-ptznm\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.301774 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-public-tls-certs\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.302342 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.317059 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl4c4\" (UniqueName: \"kubernetes.io/projected/3d968ae6-da72-485c-ac6d-393bbc1363da-kube-api-access-pl4c4\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.321263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data-custom\") pod \"heat-api-84586cb599-nkzm2\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.398532 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.470602 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.901399 4675 generic.go:334] "Generic (PLEG): container finished" podID="b7c2373f-bce5-424a-9747-56897bb05444" containerID="b36d0ea5d1306c394b12fd383e8b0388fdbc8f610967e86e89064936a007db90" exitCode=137 Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.901406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b7c2373f-bce5-424a-9747-56897bb05444","Type":"ContainerDied","Data":"b36d0ea5d1306c394b12fd383e8b0388fdbc8f610967e86e89064936a007db90"} Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.903675 4675 generic.go:334] "Generic (PLEG): container finished" podID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" containerID="3ef1ba932f4e8b915160b710f98a68a1851ea8eefca5698703a54140dd2190c8" exitCode=0 Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.903722 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" event={"ID":"6f3372bb-4733-4de4-b579-e9ede0ce2ed4","Type":"ContainerDied","Data":"3ef1ba932f4e8b915160b710f98a68a1851ea8eefca5698703a54140dd2190c8"} Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.906105 4675 generic.go:334] "Generic (PLEG): container finished" podID="18c5726b-3250-4397-bbed-a88da8daa0de" containerID="0ee36b2cd00b1a6b03a0d7a2f85eb018b58cf861ba33c265ed0fe3cf79cf1552" exitCode=0 Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.906145 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" event={"ID":"18c5726b-3250-4397-bbed-a88da8daa0de","Type":"ContainerDied","Data":"0ee36b2cd00b1a6b03a0d7a2f85eb018b58cf861ba33c265ed0fe3cf79cf1552"} Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.908087 4675 generic.go:334] "Generic (PLEG): container finished" podID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" containerID="a37f1a1a8902fdd6c0479b5a4f9f07d860b6c277a5c08afbe2a79a259a61de49" exitCode=0 Nov 21 13:59:25 crc kubenswrapper[4675]: I1121 13:59:25.908121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c9cd5548d-qsvvl" event={"ID":"55a4bbbf-724c-4ec4-95ce-0bc8395012f7","Type":"ContainerDied","Data":"a37f1a1a8902fdd6c0479b5a4f9f07d860b6c277a5c08afbe2a79a259a61de49"} Nov 21 13:59:26 crc kubenswrapper[4675]: I1121 13:59:26.083571 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Nov 21 13:59:26 crc kubenswrapper[4675]: I1121 13:59:26.107678 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.200:8776/healthcheck\": dial tcp 10.217.0.200:8776: connect: connection refused" Nov 21 13:59:26 crc kubenswrapper[4675]: I1121 13:59:26.290363 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:59:26 crc kubenswrapper[4675]: I1121 13:59:26.290605 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-log" containerID="cri-o://a9f76b2154c6ee134725d91b1f6d072e62ab28816574c37c271fe38071471768" gracePeriod=30 Nov 21 13:59:26 crc 
kubenswrapper[4675]: I1121 13:59:26.290691 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-httpd" containerID="cri-o://60cad6006b01a90cf51d23a40f106d8666a0d3eaddd388aff83fb51c8c7fd079" gracePeriod=30 Nov 21 13:59:26 crc kubenswrapper[4675]: I1121 13:59:26.927815 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerID="a9f76b2154c6ee134725d91b1f6d072e62ab28816574c37c271fe38071471768" exitCode=143 Nov 21 13:59:26 crc kubenswrapper[4675]: I1121 13:59:26.928012 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f36a643-d8fe-4f5e-b618-e1e25914628c","Type":"ContainerDied","Data":"a9f76b2154c6ee134725d91b1f6d072e62ab28816574c37c271fe38071471768"} Nov 21 13:59:27 crc kubenswrapper[4675]: I1121 13:59:27.943605 4675 generic.go:334] "Generic (PLEG): container finished" podID="597a2793-4d69-4558-906f-ee005618985e" containerID="b08a2793e7f8a34083efd3edfe08bcd77a57c0b062fe7d0c64bb2c77cbe6829c" exitCode=0 Nov 21 13:59:27 crc kubenswrapper[4675]: I1121 13:59:27.943648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"597a2793-4d69-4558-906f-ee005618985e","Type":"ContainerDied","Data":"b08a2793e7f8a34083efd3edfe08bcd77a57c0b062fe7d0c64bb2c77cbe6829c"} Nov 21 13:59:28 crc kubenswrapper[4675]: E1121 13:59:28.063992 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:28 crc kubenswrapper[4675]: I1121 13:59:28.850184 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 13:59:28 crc kubenswrapper[4675]: E1121 13:59:28.850927 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.037920 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.179398 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-config\") pod \"18c5726b-3250-4397-bbed-a88da8daa0de\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.179458 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-svc\") pod \"18c5726b-3250-4397-bbed-a88da8daa0de\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.179496 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdfjm\" (UniqueName: \"kubernetes.io/projected/18c5726b-3250-4397-bbed-a88da8daa0de-kube-api-access-fdfjm\") pod \"18c5726b-3250-4397-bbed-a88da8daa0de\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.179551 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-swift-storage-0\") pod \"18c5726b-3250-4397-bbed-a88da8daa0de\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.179591 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-sb\") pod \"18c5726b-3250-4397-bbed-a88da8daa0de\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.179680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-nb\") pod \"18c5726b-3250-4397-bbed-a88da8daa0de\" (UID: \"18c5726b-3250-4397-bbed-a88da8daa0de\") " Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.196837 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c5726b-3250-4397-bbed-a88da8daa0de-kube-api-access-fdfjm" (OuterVolumeSpecName: "kube-api-access-fdfjm") pod "18c5726b-3250-4397-bbed-a88da8daa0de" (UID: "18c5726b-3250-4397-bbed-a88da8daa0de"). InnerVolumeSpecName "kube-api-access-fdfjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.283562 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdfjm\" (UniqueName: \"kubernetes.io/projected/18c5726b-3250-4397-bbed-a88da8daa0de-kube-api-access-fdfjm\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.333832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-config" (OuterVolumeSpecName: "config") pod "18c5726b-3250-4397-bbed-a88da8daa0de" (UID: "18c5726b-3250-4397-bbed-a88da8daa0de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.334013 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "18c5726b-3250-4397-bbed-a88da8daa0de" (UID: "18c5726b-3250-4397-bbed-a88da8daa0de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.334977 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18c5726b-3250-4397-bbed-a88da8daa0de" (UID: "18c5726b-3250-4397-bbed-a88da8daa0de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.338450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18c5726b-3250-4397-bbed-a88da8daa0de" (UID: "18c5726b-3250-4397-bbed-a88da8daa0de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.349640 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18c5726b-3250-4397-bbed-a88da8daa0de" (UID: "18c5726b-3250-4397-bbed-a88da8daa0de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.386369 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.386407 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.386419 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.386429 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.386442 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c5726b-3250-4397-bbed-a88da8daa0de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.733530 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wnqhq"] Nov 21 13:59:29 crc kubenswrapper[4675]: E1121 13:59:29.734606 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="init" Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.734631 
4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="init"
Nov 21 13:59:29 crc kubenswrapper[4675]: E1121 13:59:29.734661 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="dnsmasq-dns"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.734670 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="dnsmasq-dns"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.735554 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" containerName="dnsmasq-dns"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.736915 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.754109 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wnqhq"]
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.804938 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8vslv"]
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.805416 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.805986 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-log"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.806016 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-httpd"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.807235 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.817586 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8vslv"]
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.817724 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.828144 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.847627 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.881783 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c9cd5548d-qsvvl"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.911267 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-combined-ca-bundle\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.911370 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data\") pod \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.911433 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7c2373f-bce5-424a-9747-56897bb05444-etc-machine-id\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.911493 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7c2373f-bce5-424a-9747-56897bb05444-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.911945 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-public-tls-certs\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.912051 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwc65\" (UniqueName: \"kubernetes.io/projected/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-kube-api-access-gwc65\") pod \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.912167 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-combined-ca-bundle\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.912258 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-scripts\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.912462 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-config-data\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.913989 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data-custom\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914104 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-httpd-run\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914197 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-logs\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914290 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vxkd\" (UniqueName: \"kubernetes.io/projected/b7c2373f-bce5-424a-9747-56897bb05444-kube-api-access-7vxkd\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914399 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-combined-ca-bundle\") pod \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914506 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914587 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914654 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-scripts\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914740 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data-custom\") pod \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\" (UID: \"6f3372bb-4733-4de4-b579-e9ede0ce2ed4\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914811 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z669l\" (UniqueName: \"kubernetes.io/projected/597a2793-4d69-4558-906f-ee005618985e-kube-api-access-z669l\") pod \"597a2793-4d69-4558-906f-ee005618985e\" (UID: \"597a2793-4d69-4558-906f-ee005618985e\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.914946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c2373f-bce5-424a-9747-56897bb05444-logs\") pod \"b7c2373f-bce5-424a-9747-56897bb05444\" (UID: \"b7c2373f-bce5-424a-9747-56897bb05444\") "
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.915666 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7c2373f-bce5-424a-9747-56897bb05444-logs" (OuterVolumeSpecName: "logs") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.918780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ht78\" (UniqueName: \"kubernetes.io/projected/028aa8ea-8e30-4ec4-8280-59935c9cf343-kube-api-access-8ht78\") pod \"nova-cell0-db-create-8vslv\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.918985 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/028aa8ea-8e30-4ec4-8280-59935c9cf343-operator-scripts\") pod \"nova-cell0-db-create-8vslv\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.919498 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ae1285-2384-4da2-803f-9395625e88de-operator-scripts\") pod \"nova-api-db-create-wnqhq\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.919728 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588hb\" (UniqueName: \"kubernetes.io/projected/68ae1285-2384-4da2-803f-9395625e88de-kube-api-access-588hb\") pod \"nova-api-db-create-wnqhq\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.922664 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.923196 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-logs" (OuterVolumeSpecName: "logs") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.927904 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-scripts" (OuterVolumeSpecName: "scripts") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.931948 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-scripts" (OuterVolumeSpecName: "scripts") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.938115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.942232 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c2373f-bce5-424a-9747-56897bb05444-logs\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.942333 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7c2373f-bce5-424a-9747-56897bb05444-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.943302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-kube-api-access-gwc65" (OuterVolumeSpecName: "kube-api-access-gwc65") pod "6f3372bb-4733-4de4-b579-e9ede0ce2ed4" (UID: "6f3372bb-4733-4de4-b579-e9ede0ce2ed4"). InnerVolumeSpecName "kube-api-access-gwc65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.961277 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.961510 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597a2793-4d69-4558-906f-ee005618985e-kube-api-access-z669l" (OuterVolumeSpecName: "kube-api-access-z669l") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "kube-api-access-z669l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.962319 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c2373f-bce5-424a-9747-56897bb05444-kube-api-access-7vxkd" (OuterVolumeSpecName: "kube-api-access-7vxkd") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "kube-api-access-7vxkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:29 crc kubenswrapper[4675]: I1121 13:59:29.968030 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f3372bb-4733-4de4-b579-e9ede0ce2ed4" (UID: "6f3372bb-4733-4de4-b579-e9ede0ce2ed4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.007393 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f7f0-account-create-9jvch"]
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008033 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-httpd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008050 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-httpd"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008097 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="proxy-httpd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008108 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="proxy-httpd"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008128 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008137 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008151 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" containerName="heat-api"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008159 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" containerName="heat-api"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008174 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-notification-agent"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008182 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-notification-agent"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008191 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-log"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008198 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="597a2793-4d69-4558-906f-ee005618985e" containerName="glance-log"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008225 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="sg-core"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008233 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="sg-core"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008250 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" containerName="heat-cfnapi"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008258 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" containerName="heat-cfnapi"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008279 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-central-agent"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008286 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-central-agent"
Nov 21 13:59:30 crc kubenswrapper[4675]: E1121 13:59:30.008312 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api-log"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008319 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api-log"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api-log"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008631 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" containerName="heat-api"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008641 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-central-agent"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008656 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="ceilometer-notification-agent"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008673 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="proxy-httpd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008682 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c2373f-bce5-424a-9747-56897bb05444" containerName="cinder-api"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008703 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" containerName="heat-cfnapi"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.008832 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" containerName="sg-core"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.009896 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.015502 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.043968 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-combined-ca-bundle\") pod \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044025 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-scripts\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044193 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvxmw\" (UniqueName: \"kubernetes.io/projected/1a9c792a-f9a3-416b-b131-ac61338200da-kube-api-access-xvxmw\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044343 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-combined-ca-bundle\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044417 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf5pn\" (UniqueName: \"kubernetes.io/projected/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-kube-api-access-qf5pn\") pod \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044494 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-log-httpd\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data-custom\") pod \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044567 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-sg-core-conf-yaml\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044646 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-run-httpd\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044672 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data\") pod \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\" (UID: \"55a4bbbf-724c-4ec4-95ce-0bc8395012f7\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.044709 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-config-data\") pod \"1a9c792a-f9a3-416b-b131-ac61338200da\" (UID: \"1a9c792a-f9a3-416b-b131-ac61338200da\") "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.045139 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-588hb\" (UniqueName: \"kubernetes.io/projected/68ae1285-2384-4da2-803f-9395625e88de-kube-api-access-588hb\") pod \"nova-api-db-create-wnqhq\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.045474 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ht78\" (UniqueName: \"kubernetes.io/projected/028aa8ea-8e30-4ec4-8280-59935c9cf343-kube-api-access-8ht78\") pod \"nova-cell0-db-create-8vslv\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.045588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/028aa8ea-8e30-4ec4-8280-59935c9cf343-operator-scripts\") pod \"nova-cell0-db-create-8vslv\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.045632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ae1285-2384-4da2-803f-9395625e88de-operator-scripts\") pod \"nova-api-db-create-wnqhq\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046096 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z669l\" (UniqueName: \"kubernetes.io/projected/597a2793-4d69-4558-906f-ee005618985e-kube-api-access-z669l\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046115 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwc65\" (UniqueName: \"kubernetes.io/projected/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-kube-api-access-gwc65\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046127 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046139 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046150 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046161 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/597a2793-4d69-4558-906f-ee005618985e-logs\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046173 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vxkd\" (UniqueName: \"kubernetes.io/projected/b7c2373f-bce5-424a-9747-56897bb05444-kube-api-access-7vxkd\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046197 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046209 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.046222 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.049439 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.050927 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.057753 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/028aa8ea-8e30-4ec4-8280-59935c9cf343-operator-scripts\") pod \"nova-cell0-db-create-8vslv\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.058288 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f7f0-account-create-9jvch"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.058729 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ae1285-2384-4da2-803f-9395625e88de-operator-scripts\") pod \"nova-api-db-create-wnqhq\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.058892 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b6ec2a5-ea89-459f-b66c-4822e68f1498","Type":"ContainerStarted","Data":"ea16b8a03285d1b9fdd8671c53777bce8f48ea2c9becf9def59b40fab2424487"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.079096 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" event={"ID":"6f3372bb-4733-4de4-b579-e9ede0ce2ed4","Type":"ContainerDied","Data":"a1d5266e15e3292a64f75215fc58645808065cec909ed9c28ed3b510d8b8cb38"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.079174 4675 scope.go:117] "RemoveContainer" containerID="3ef1ba932f4e8b915160b710f98a68a1851ea8eefca5698703a54140dd2190c8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.079424 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.086652 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerID="60cad6006b01a90cf51d23a40f106d8666a0d3eaddd388aff83fb51c8c7fd079" exitCode=0
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.086719 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f36a643-d8fe-4f5e-b618-e1e25914628c","Type":"ContainerDied","Data":"60cad6006b01a90cf51d23a40f106d8666a0d3eaddd388aff83fb51c8c7fd079"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.099420 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.099443 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8z4kx" event={"ID":"18c5726b-3250-4397-bbed-a88da8daa0de","Type":"ContainerDied","Data":"5b7f19181fdb25275cd4532615ffc6a5c129b114a5f36997a51dcd1de26e84c0"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.101579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c9cd5548d-qsvvl" event={"ID":"55a4bbbf-724c-4ec4-95ce-0bc8395012f7","Type":"ContainerDied","Data":"9d8e3e4c4ac4a51cfb5af357397bb80d3d34cbd039f2efb10d098f41343ef852"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.102094 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c9cd5548d-qsvvl"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.110877 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.111293 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b7c2373f-bce5-424a-9747-56897bb05444","Type":"ContainerDied","Data":"0378a49471878a214b15b3fde10f8f6cfdfbd7e850257e82bbab4d32755f6a3f"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.116411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a9c792a-f9a3-416b-b131-ac61338200da","Type":"ContainerDied","Data":"b47377ae6a41f1382543cd740d1cc0fea5bdb9c47b8cd2ca05fdca27c5852cea"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.116642 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.119811 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"597a2793-4d69-4558-906f-ee005618985e","Type":"ContainerDied","Data":"b519fb097b6e27977981bac5fb31b4d890802f6efdc9e48deb9f6689aa7aa5c8"}
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.120153 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.136020 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lxtx8"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.138980 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.157143 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhkvt\" (UniqueName: \"kubernetes.io/projected/91ece073-0d12-40a6-a6c6-8f40cbc5268f-kube-api-access-lhkvt\") pod \"nova-api-f7f0-account-create-9jvch\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.157280 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ece073-0d12-40a6-a6c6-8f40cbc5268f-operator-scripts\") pod \"nova-api-f7f0-account-create-9jvch\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.157775 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.157933 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a9c792a-f9a3-416b-b131-ac61338200da-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.160377 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-kube-api-access-qf5pn" (OuterVolumeSpecName: "kube-api-access-qf5pn") pod "55a4bbbf-724c-4ec4-95ce-0bc8395012f7" (UID: "55a4bbbf-724c-4ec4-95ce-0bc8395012f7"). InnerVolumeSpecName "kube-api-access-qf5pn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.164336 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "55a4bbbf-724c-4ec4-95ce-0bc8395012f7" (UID: "55a4bbbf-724c-4ec4-95ce-0bc8395012f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.164704 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-scripts" (OuterVolumeSpecName: "scripts") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.166870 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lxtx8"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.172039 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9c792a-f9a3-416b-b131-ac61338200da-kube-api-access-xvxmw" (OuterVolumeSpecName: "kube-api-access-xvxmw") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "kube-api-access-xvxmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.172309 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-588hb\" (UniqueName: \"kubernetes.io/projected/68ae1285-2384-4da2-803f-9395625e88de-kube-api-access-588hb\") pod \"nova-api-db-create-wnqhq\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.179569 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ht78\" (UniqueName: \"kubernetes.io/projected/028aa8ea-8e30-4ec4-8280-59935c9cf343-kube-api-access-8ht78\") pod \"nova-cell0-db-create-8vslv\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.189238 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a3b5-account-create-d6mhd"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.190669 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.33785279 podStartE2EDuration="20.190652133s" podCreationTimestamp="2025-11-21 13:59:10 +0000 UTC" firstStartedPulling="2025-11-21 13:59:12.01154369 +0000 UTC m=+1628.737958427" lastFinishedPulling="2025-11-21 13:59:28.864343043 +0000 UTC m=+1645.590757770" observedRunningTime="2025-11-21 13:59:30.142739076 +0000 UTC m=+1646.869153803" watchObservedRunningTime="2025-11-21 13:59:30.190652133 +0000 UTC m=+1646.917066860"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.191258 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.193683 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.198319 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.208679 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a3b5-account-create-d6mhd"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.212959 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wnqhq"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.221232 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vslv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.265570 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhkvt\" (UniqueName: \"kubernetes.io/projected/91ece073-0d12-40a6-a6c6-8f40cbc5268f-kube-api-access-lhkvt\") pod \"nova-api-f7f0-account-create-9jvch\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.265613 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.265648 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkk8\" (UniqueName: \"kubernetes.io/projected/da354635-a1e7-4632-90d1-7d0cc2dded63-kube-api-access-xtkk8\") pod \"nova-cell1-db-create-lxtx8\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.265685 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ece073-0d12-40a6-a6c6-8f40cbc5268f-operator-scripts\") pod \"nova-api-f7f0-account-create-9jvch\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.265707 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da354635-a1e7-4632-90d1-7d0cc2dded63-operator-scripts\") pod \"nova-cell1-db-create-lxtx8\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf5pn\" (UniqueName: \"kubernetes.io/projected/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-kube-api-access-qf5pn\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266317 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266329 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266340 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266351 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvxmw\" (UniqueName: \"kubernetes.io/projected/1a9c792a-f9a3-416b-b131-ac61338200da-kube-api-access-xvxmw\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266363 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.266966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ece073-0d12-40a6-a6c6-8f40cbc5268f-operator-scripts\") pod \"nova-api-f7f0-account-create-9jvch\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.273505 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.346518 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhkvt\" (UniqueName: \"kubernetes.io/projected/91ece073-0d12-40a6-a6c6-8f40cbc5268f-kube-api-access-lhkvt\") pod \"nova-api-f7f0-account-create-9jvch\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.399291 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lbz\" (UniqueName: \"kubernetes.io/projected/f8bf22f5-333f-43c5-9666-86ffa5657944-kube-api-access-l4lbz\") pod \"nova-cell0-a3b5-account-create-d6mhd\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.400420 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkk8\" (UniqueName: \"kubernetes.io/projected/da354635-a1e7-4632-90d1-7d0cc2dded63-kube-api-access-xtkk8\") pod \"nova-cell1-db-create-lxtx8\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.400464 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da354635-a1e7-4632-90d1-7d0cc2dded63-operator-scripts\") pod \"nova-cell1-db-create-lxtx8\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.400671 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8bf22f5-333f-43c5-9666-86ffa5657944-operator-scripts\") pod \"nova-cell0-a3b5-account-create-d6mhd\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.400874 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.401671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da354635-a1e7-4632-90d1-7d0cc2dded63-operator-scripts\") pod \"nova-cell1-db-create-lxtx8\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.406181 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data" (OuterVolumeSpecName: "config-data") pod "b7c2373f-bce5-424a-9747-56897bb05444" (UID: "b7c2373f-bce5-424a-9747-56897bb05444"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.408883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f3372bb-4733-4de4-b579-e9ede0ce2ed4" (UID: "6f3372bb-4733-4de4-b579-e9ede0ce2ed4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.409153 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8e46-account-create-956rv"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.414113 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.417623 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.430637 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8e46-account-create-956rv"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.456227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55a4bbbf-724c-4ec4-95ce-0bc8395012f7" (UID: "55a4bbbf-724c-4ec4-95ce-0bc8395012f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.461578 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkk8\" (UniqueName: \"kubernetes.io/projected/da354635-a1e7-4632-90d1-7d0cc2dded63-kube-api-access-xtkk8\") pod \"nova-cell1-db-create-lxtx8\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.483797 4675 scope.go:117] "RemoveContainer" containerID="0ee36b2cd00b1a6b03a0d7a2f85eb018b58cf861ba33c265ed0fe3cf79cf1552"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.497732 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-config-data" (OuterVolumeSpecName: "config-data") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8bf22f5-333f-43c5-9666-86ffa5657944-operator-scripts\") pod \"nova-cell0-a3b5-account-create-d6mhd\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505369 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6vk\" (UniqueName: \"kubernetes.io/projected/21f160c6-8942-4e0a-bf07-6c57e7d69175-kube-api-access-kw6vk\") pod \"nova-cell1-8e46-account-create-956rv\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f160c6-8942-4e0a-bf07-6c57e7d69175-operator-scripts\") pod \"nova-cell1-8e46-account-create-956rv\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505600 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lbz\" (UniqueName: \"kubernetes.io/projected/f8bf22f5-333f-43c5-9666-86ffa5657944-kube-api-access-l4lbz\") pod \"nova-cell0-a3b5-account-create-d6mhd\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505828 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505840 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505852 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c2373f-bce5-424a-9747-56897bb05444-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.505860 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.507051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8bf22f5-333f-43c5-9666-86ffa5657944-operator-scripts\") pod \"nova-cell0-a3b5-account-create-d6mhd\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.536899 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8z4kx"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.538322 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lbz\" (UniqueName: \"kubernetes.io/projected/f8bf22f5-333f-43c5-9666-86ffa5657944-kube-api-access-l4lbz\") pod \"nova-cell0-a3b5-account-create-d6mhd\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.538328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.548319 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data" (OuterVolumeSpecName: "config-data") pod "55a4bbbf-724c-4ec4-95ce-0bc8395012f7" (UID: "55a4bbbf-724c-4ec4-95ce-0bc8395012f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.560365 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "597a2793-4d69-4558-906f-ee005618985e" (UID: "597a2793-4d69-4558-906f-ee005618985e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.594709 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8z4kx"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.600081 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data" (OuterVolumeSpecName: "config-data") pod "6f3372bb-4733-4de4-b579-e9ede0ce2ed4" (UID: "6f3372bb-4733-4de4-b579-e9ede0ce2ed4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.613323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6vk\" (UniqueName: \"kubernetes.io/projected/21f160c6-8942-4e0a-bf07-6c57e7d69175-kube-api-access-kw6vk\") pod \"nova-cell1-8e46-account-create-956rv\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.613526 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f160c6-8942-4e0a-bf07-6c57e7d69175-operator-scripts\") pod \"nova-cell1-8e46-account-create-956rv\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.613665 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.613687 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55a4bbbf-724c-4ec4-95ce-0bc8395012f7-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.613700 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3372bb-4733-4de4-b579-e9ede0ce2ed4-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.613711 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/597a2793-4d69-4558-906f-ee005618985e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.614339 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.623137 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f160c6-8942-4e0a-bf07-6c57e7d69175-operator-scripts\") pod \"nova-cell1-8e46-account-create-956rv\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.630349 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.636150 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6vk\" (UniqueName: \"kubernetes.io/projected/21f160c6-8942-4e0a-bf07-6c57e7d69175-kube-api-access-kw6vk\") pod \"nova-cell1-8e46-account-create-956rv\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " pod="openstack/nova-cell1-8e46-account-create-956rv"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.653378 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.655299 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.660153 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.660662 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.661103 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.676870 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.713905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.715634 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.747662 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-config-data" (OuterVolumeSpecName: "config-data") pod "1a9c792a-f9a3-416b-b131-ac61338200da" (UID: "1a9c792a-f9a3-416b-b131-ac61338200da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817569 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-scripts\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817612 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772l7\" (UniqueName: \"kubernetes.io/projected/64cafc2c-04de-4090-9026-2b986fcae86a-kube-api-access-772l7\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817646 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-config-data-custom\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817705 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64cafc2c-04de-4090-9026-2b986fcae86a-logs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817726 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-config-data\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817800 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64cafc2c-04de-4090-9026-2b986fcae86a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817860 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817879 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.817942 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a9c792a-f9a3-416b-b131-ac61338200da-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.891202 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lxtx8"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.892273 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7f0-account-create-9jvch"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.897934 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a3b5-account-create-d6mhd"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.900504 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c5726b-3250-4397-bbed-a88da8daa0de" path="/var/lib/kubelet/pods/18c5726b-3250-4397-bbed-a88da8daa0de/volumes"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.916443 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c2373f-bce5-424a-9747-56897bb05444" path="/var/lib/kubelet/pods/b7c2373f-bce5-424a-9747-56897bb05444/volumes"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.917643 4675 scope.go:117] "RemoveContainer" containerID="99c478097479c76e6a304b82fea4b01c4a3bad183ba8b5dbd379d3c67c4c3ebc"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919330 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-scripts\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772l7\" (UniqueName: \"kubernetes.io/projected/64cafc2c-04de-4090-9026-2b986fcae86a-kube-api-access-772l7\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-config-data-custom\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919455 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64cafc2c-04de-4090-9026-2b986fcae86a-logs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919474 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0"
Nov 21 13:59:30 crc kubenswrapper[4675]:
I1121 13:59:30.919499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-config-data\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919554 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64cafc2c-04de-4090-9026-2b986fcae86a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.919614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.921276 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64cafc2c-04de-4090-9026-2b986fcae86a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.922651 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64cafc2c-04de-4090-9026-2b986fcae86a-logs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.936137 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.936204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-config-data-custom\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.936414 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-scripts\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.936849 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.941511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.943626 4675 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-8e46-account-create-956rv" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.951941 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772l7\" (UniqueName: \"kubernetes.io/projected/64cafc2c-04de-4090-9026-2b986fcae86a-kube-api-access-772l7\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.953182 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64cafc2c-04de-4090-9026-2b986fcae86a-config-data\") pod \"cinder-api-0\" (UID: \"64cafc2c-04de-4090-9026-2b986fcae86a\") " pod="openstack/cinder-api-0" Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.974343 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-69bbd8cb64-j4z85"] Nov 21 13:59:30 crc kubenswrapper[4675]: I1121 13:59:30.982394 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:30.998245 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-69bbd8cb64-j4z85"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.057052 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84586cb599-nkzm2"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.062810 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: W1121 13:59:31.091534 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod979b6714_8706_4ab6_bd6e_dd127eed8347.slice/crio-4a6590bf441faa059f7c84af3f57775400dcbd795fff706539a67948ab9c0ac5 WatchSource:0}: Error finding container 4a6590bf441faa059f7c84af3f57775400dcbd795fff706539a67948ab9c0ac5: Status 404 returned error can't find the container with id 4a6590bf441faa059f7c84af3f57775400dcbd795fff706539a67948ab9c0ac5 Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.111439 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c9cd5548d-qsvvl"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.124963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-config-data\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.125923 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-combined-ca-bundle\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.125980 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-logs\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.126033 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-httpd-run\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.129638 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvbj\" (UniqueName: \"kubernetes.io/projected/3f36a643-d8fe-4f5e-b618-e1e25914628c-kube-api-access-zkvbj\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.129696 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-scripts\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.129776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-internal-tls-certs\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.129824 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3f36a643-d8fe-4f5e-b618-e1e25914628c\" (UID: \"3f36a643-d8fe-4f5e-b618-e1e25914628c\") " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.136424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.136657 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-logs" (OuterVolumeSpecName: "logs") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.146353 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f36a643-d8fe-4f5e-b618-e1e25914628c-kube-api-access-zkvbj" (OuterVolumeSpecName: "kube-api-access-zkvbj") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "kube-api-access-zkvbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.148817 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.164625 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7c9cd5548d-qsvvl"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.183604 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-scripts" (OuterVolumeSpecName: "scripts") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.194648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84586cb599-nkzm2" event={"ID":"3d968ae6-da72-485c-ac6d-393bbc1363da","Type":"ContainerStarted","Data":"6c9d1bf74f49e55277306b0ff7db779b9ac6709622f663f7416440430629322a"} Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.223540 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f94f9658f-ptznm" event={"ID":"f1015b8a-a8a3-4941-8959-2d4fd5aee749","Type":"ContainerStarted","Data":"55434c194034a900ab781043a71564447fc178113a0dbbf7de6bc77993ab1fbe"} Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.233953 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.243830 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.243868 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f36a643-d8fe-4f5e-b618-e1e25914628c-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.243880 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvbj\" (UniqueName: \"kubernetes.io/projected/3f36a643-d8fe-4f5e-b618-e1e25914628c-kube-api-access-zkvbj\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.243887 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.243905 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.306650 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f36a643-d8fe-4f5e-b618-e1e25914628c","Type":"ContainerDied","Data":"05fa1b9642108acf57540f94436dd14391ce8c9e372c765ee156c081c3032e5e"} Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.306964 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.385936 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.388488 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864b497b7c-2zlp2" event={"ID":"979b6714-8706-4ab6-bd6e-dd127eed8347","Type":"ContainerStarted","Data":"4a6590bf441faa059f7c84af3f57775400dcbd795fff706539a67948ab9c0ac5"} Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.393726 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" event={"ID":"6e39ea81-bdee-475a-87ea-5fbd7c02759f","Type":"ContainerStarted","Data":"d8ede6431d28a684f7db11545895465314723a23c497b347c46c127fe36c43c5"} Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.451642 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.474877 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.510498 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.510856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7498687b57-vr4xt" event={"ID":"0fbe5346-a173-4dc8-97f3-800ada75bf1b","Type":"ContainerStarted","Data":"2ddfc1ed09cc32d3f69bed90e16cd7afc2d86d1b714229b87bb6a48b0f4c3266"} Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.544396 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.546320 4675 scope.go:117] "RemoveContainer" containerID="a37f1a1a8902fdd6c0479b5a4f9f07d860b6c277a5c08afbe2a79a259a61de49" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.558753 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.558800 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.617039 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-f94f9658f-ptznm"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.632779 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-config-data" (OuterVolumeSpecName: "config-data") pod "3f36a643-d8fe-4f5e-b618-e1e25914628c" (UID: "3f36a643-d8fe-4f5e-b618-e1e25914628c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.650414 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5856d7f7bf-rsjh7"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.660817 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f36a643-d8fe-4f5e-b618-e1e25914628c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.693514 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7498687b57-vr4xt"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.708831 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: E1121 13:59:31.709518 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-httpd" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.709536 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-httpd" Nov 21 13:59:31 crc kubenswrapper[4675]: E1121 13:59:31.709555 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-log" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.709562 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-log" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.709913 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-httpd" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.709953 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" containerName="glance-log" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.711414 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.714522 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.714683 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.726343 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-864b497b7c-2zlp2"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.738832 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777314 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777629 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777697 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkbx\" (UniqueName: \"kubernetes.io/projected/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-kube-api-access-njkbx\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777747 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777785 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-logs\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777821 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " 
pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.777850 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.798141 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.817319 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.841403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cb8d89bd7-jsjdv"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.858135 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.862433 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.865544 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.865768 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.871434 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.880368 4675 scope.go:117] "RemoveContainer" containerID="b36d0ea5d1306c394b12fd383e8b0388fdbc8f610967e86e89064936a007db90" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.884555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.884678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.884762 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.884871 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.884927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.885100 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkbx\" (UniqueName: \"kubernetes.io/projected/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-kube-api-access-njkbx\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.885214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.885280 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-logs\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.885671 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.886583 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-logs\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.886648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.906425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.910008 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wnqhq"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.915602 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.917376 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.919292 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkbx\" (UniqueName: \"kubernetes.io/projected/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-kube-api-access-njkbx\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.927211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.935524 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8vslv"] Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.952414 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc\") " pod="openstack/glance-default-external-api-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987373 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-scripts\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987528 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987580 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987659 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-config-data\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987770 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-run-httpd\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987842 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjdf\" (UniqueName: \"kubernetes.io/projected/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-kube-api-access-kkjdf\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.987887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-log-httpd\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:31 crc kubenswrapper[4675]: I1121 13:59:31.991217 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lxtx8"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.008526 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.072289 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092148 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092190 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-config-data\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092248 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-run-httpd\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkjdf\" (UniqueName: \"kubernetes.io/projected/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-kube-api-access-kkjdf\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092310 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-log-httpd\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.092391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-scripts\") pod 
\"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.094929 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-run-httpd\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.095431 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-log-httpd\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.097141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-scripts\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.097544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.098595 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.107649 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.109024 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-config-data\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.120564 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkjdf\" (UniqueName: \"kubernetes.io/projected/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-kube-api-access-kkjdf\") pod \"ceilometer-0\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.126089 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.128178 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.135429 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.135666 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.141052 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.273294 4675 scope.go:117] "RemoveContainer" containerID="13c5aa0875116ba8f569356c5c1108991e2aba4aa4786c798e52a3b032e04b35" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.292846 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8e46-account-create-956rv"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.308888 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.308994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqfg\" (UniqueName: \"kubernetes.io/projected/e08a8ae1-1033-4b31-89df-b85614075cbf-kube-api-access-rxqfg\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.309040 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.309164 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.309200 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e08a8ae1-1033-4b31-89df-b85614075cbf-logs\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.309449 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e08a8ae1-1033-4b31-89df-b85614075cbf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.309491 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.309557 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: W1121 13:59:32.343552 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21f160c6_8942_4e0a_bf07_6c57e7d69175.slice/crio-c8420c8d452b64908c1a9c4ba4074206abbd7f0e67f552cf76644c47b44448c3 WatchSource:0}: Error finding container c8420c8d452b64908c1a9c4ba4074206abbd7f0e67f552cf76644c47b44448c3: Status 404 returned error can't find the container with id c8420c8d452b64908c1a9c4ba4074206abbd7f0e67f552cf76644c47b44448c3 Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.394865 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.398753 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a3b5-account-create-d6mhd"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e08a8ae1-1033-4b31-89df-b85614075cbf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411533 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411574 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411618 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411653 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqfg\" (UniqueName: \"kubernetes.io/projected/e08a8ae1-1033-4b31-89df-b85614075cbf-kube-api-access-rxqfg\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411679 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411710 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.411734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e08a8ae1-1033-4b31-89df-b85614075cbf-logs\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.412022 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.412226 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e08a8ae1-1033-4b31-89df-b85614075cbf-logs\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.412726 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e08a8ae1-1033-4b31-89df-b85614075cbf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.422781 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.423966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.424370 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.424980 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e08a8ae1-1033-4b31-89df-b85614075cbf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.444522 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqfg\" (UniqueName: \"kubernetes.io/projected/e08a8ae1-1033-4b31-89df-b85614075cbf-kube-api-access-rxqfg\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.464051 4675 scope.go:117] "RemoveContainer" containerID="7b399597907711517d0542971cfa5b8956e074a0e36399f16c96ad892ef696a8" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.550735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vslv" event={"ID":"028aa8ea-8e30-4ec4-8280-59935c9cf343","Type":"ContainerStarted","Data":"d3a0ee3d3479ee56e9a4044a6bab784c18e1a1550dc197212991575f1f270a26"} Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.553156 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84586cb599-nkzm2" event={"ID":"3d968ae6-da72-485c-ac6d-393bbc1363da","Type":"ContainerStarted","Data":"79dc15a658e99d0bfb91d43d4d00647d8bd2c017779b4d4cdef3bfd09d83dcd0"} Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.554816 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.575217 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" event={"ID":"35b58484-6cb2-4edc-bea9-4d3a8d6b1479","Type":"ContainerStarted","Data":"2e80ac043f83b1a8a5d59b50956b4c6ff8871fad887fac4c86fabce55648cb39"} Nov 21 13:59:32 crc kubenswrapper[4675]: W1121 13:59:32.576345 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8bf22f5_333f_43c5_9666_86ffa5657944.slice/crio-c10ce763a81306cc00f7aa03c08497fcf56897f10eb55cbde7facb7e84c9bb0c WatchSource:0}: Error finding container c10ce763a81306cc00f7aa03c08497fcf56897f10eb55cbde7facb7e84c9bb0c: Status 404 returned error can't find the container with id c10ce763a81306cc00f7aa03c08497fcf56897f10eb55cbde7facb7e84c9bb0c Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.577107 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lxtx8" event={"ID":"da354635-a1e7-4632-90d1-7d0cc2dded63","Type":"ContainerStarted","Data":"3aef7dfefe9ee5ce585777d15a0b91e5d8ae2f3a2489b17eb34853c787489f26"} Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.615846 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.627856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wnqhq" event={"ID":"68ae1285-2384-4da2-803f-9395625e88de","Type":"ContainerStarted","Data":"296fa9acb2423862a441d43313424516e4e18be0d3c9319f15307887d2253035"} Nov 21 13:59:32 crc kubenswrapper[4675]: W1121 13:59:32.644785 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91ece073_0d12_40a6_a6c6_8f40cbc5268f.slice/crio-7ba8818b4b02f4005df48c836ed93c33f875a4ab50d0379f4cce8565227a3b4d WatchSource:0}: Error finding 
container 7ba8818b4b02f4005df48c836ed93c33f875a4ab50d0379f4cce8565227a3b4d: Status 404 returned error can't find the container with id 7ba8818b4b02f4005df48c836ed93c33f875a4ab50d0379f4cce8565227a3b4d Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.651483 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e46-account-create-956rv" event={"ID":"21f160c6-8942-4e0a-bf07-6c57e7d69175","Type":"ContainerStarted","Data":"c8420c8d452b64908c1a9c4ba4074206abbd7f0e67f552cf76644c47b44448c3"} Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.653384 4675 scope.go:117] "RemoveContainer" containerID="a16920f6623c44c18f72947a03388fb206f87260eb80bbaaf9dc97629a146bd4" Nov 21 13:59:32 crc kubenswrapper[4675]: W1121 13:59:32.703368 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64cafc2c_04de_4090_9026_2b986fcae86a.slice/crio-44d1963cd10ca30380ceb66d6fb6feefe1237a3ccd8fd349a1fa251674755d8c WatchSource:0}: Error finding container 44d1963cd10ca30380ceb66d6fb6feefe1237a3ccd8fd349a1fa251674755d8c: Status 404 returned error can't find the container with id 44d1963cd10ca30380ceb66d6fb6feefe1237a3ccd8fd349a1fa251674755d8c Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.741264 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-84586cb599-nkzm2" podStartSLOduration=8.741244793 podStartE2EDuration="8.741244793s" podCreationTimestamp="2025-11-21 13:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:32.579365728 +0000 UTC m=+1649.305780455" watchObservedRunningTime="2025-11-21 13:59:32.741244793 +0000 UTC m=+1649.467659520" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.744733 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f7f0-account-create-9jvch"] Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.845089 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e08a8ae1-1033-4b31-89df-b85614075cbf\") " pod="openstack/glance-default-internal-api-0" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.886196 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9c792a-f9a3-416b-b131-ac61338200da" path="/var/lib/kubelet/pods/1a9c792a-f9a3-416b-b131-ac61338200da/volumes" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.887343 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f36a643-d8fe-4f5e-b618-e1e25914628c" path="/var/lib/kubelet/pods/3f36a643-d8fe-4f5e-b618-e1e25914628c/volumes" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.888766 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" path="/var/lib/kubelet/pods/55a4bbbf-724c-4ec4-95ce-0bc8395012f7/volumes" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.889604 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597a2793-4d69-4558-906f-ee005618985e" path="/var/lib/kubelet/pods/597a2793-4d69-4558-906f-ee005618985e/volumes" Nov 21 13:59:32 crc kubenswrapper[4675]: I1121 13:59:32.890440 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" 
path="/var/lib/kubelet/pods/6f3372bb-4733-4de4-b579-e9ede0ce2ed4/volumes" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.086353 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.139137 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 13:59:33 crc kubenswrapper[4675]: W1121 13:59:33.222011 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcb2db9c_ab36_40ae_8a0a_e3a48a9b92bc.slice/crio-bfda26b0f5d6dd1011cf5de34995e6ff476d98e1b2698422fe96dd5cfbadab33 WatchSource:0}: Error finding container bfda26b0f5d6dd1011cf5de34995e6ff476d98e1b2698422fe96dd5cfbadab33: Status 404 returned error can't find the container with id bfda26b0f5d6dd1011cf5de34995e6ff476d98e1b2698422fe96dd5cfbadab33 Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.368700 4675 scope.go:117] "RemoveContainer" containerID="a2eab34636acae345c92e2d8d0fd78a5a04945f4676ff2ad7f9782407e9d14cb" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.434874 4675 scope.go:117] "RemoveContainer" containerID="c098e4bd6707d7f500240a9676157dd14fcc74c844b5aa3d7316c14abb374ca6" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.496328 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.515000 4675 scope.go:117] "RemoveContainer" containerID="b08a2793e7f8a34083efd3edfe08bcd77a57c0b062fe7d0c64bb2c77cbe6829c" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.723703 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f0-account-create-9jvch" event={"ID":"91ece073-0d12-40a6-a6c6-8f40cbc5268f","Type":"ContainerStarted","Data":"7ba8818b4b02f4005df48c836ed93c33f875a4ab50d0379f4cce8565227a3b4d"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.730431 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" event={"ID":"35b58484-6cb2-4edc-bea9-4d3a8d6b1479","Type":"ContainerStarted","Data":"60cb70e3ed12d9659cbf6d54f3ed52bbe90c005e335b407d01ae75eb8654dd9b"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.735505 4675 generic.go:334] "Generic (PLEG): container finished" podID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerID="b6983cd6a242532a73e9887a4035526f21fecbacc0c2d498694ea70c0649bd44" exitCode=1 Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.735586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864b497b7c-2zlp2" event={"ID":"979b6714-8706-4ab6-bd6e-dd127eed8347","Type":"ContainerDied","Data":"b6983cd6a242532a73e9887a4035526f21fecbacc0c2d498694ea70c0649bd44"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.736299 4675 scope.go:117] "RemoveContainer" containerID="b6983cd6a242532a73e9887a4035526f21fecbacc0c2d498694ea70c0649bd44" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.740218 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerID="b58a03cfcc1e87983a58d06a74f780850673403bf46a0be305c56e36155ec694" exitCode=1 Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.741023 4675 scope.go:117] "RemoveContainer" containerID="b58a03cfcc1e87983a58d06a74f780850673403bf46a0be305c56e36155ec694" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.741407 4675 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" event={"ID":"6e39ea81-bdee-475a-87ea-5fbd7c02759f","Type":"ContainerDied","Data":"b58a03cfcc1e87983a58d06a74f780850673403bf46a0be305c56e36155ec694"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.743498 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" event={"ID":"f8bf22f5-333f-43c5-9666-86ffa5657944","Type":"ContainerStarted","Data":"c10ce763a81306cc00f7aa03c08497fcf56897f10eb55cbde7facb7e84c9bb0c"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.745105 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerStarted","Data":"9a0fb8f30f2e42cd9c7fbf8b3f9c6a11e59f354df32e0b2c6f76c24ff185f736"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.746314 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc","Type":"ContainerStarted","Data":"bfda26b0f5d6dd1011cf5de34995e6ff476d98e1b2698422fe96dd5cfbadab33"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.747865 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wnqhq" event={"ID":"68ae1285-2384-4da2-803f-9395625e88de","Type":"ContainerStarted","Data":"61a97fb024614fff96db2c59d2e2bcfd35aea206500ace4ce0294a52edf17b02"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.784124 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64cafc2c-04de-4090-9026-2b986fcae86a","Type":"ContainerStarted","Data":"44d1963cd10ca30380ceb66d6fb6feefe1237a3ccd8fd349a1fa251674755d8c"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.786227 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-wnqhq" podStartSLOduration=4.786212914 podStartE2EDuration="4.786212914s" podCreationTimestamp="2025-11-21 13:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:33.775602549 +0000 UTC m=+1650.502017276" watchObservedRunningTime="2025-11-21 13:59:33.786212914 +0000 UTC m=+1650.512627641" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.792508 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7498687b57-vr4xt" event={"ID":"0fbe5346-a173-4dc8-97f3-800ada75bf1b","Type":"ContainerStarted","Data":"8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc"} Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.792549 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.846749 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7498687b57-vr4xt" podStartSLOduration=11.846732366 podStartE2EDuration="11.846732366s" podCreationTimestamp="2025-11-21 13:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:33.822847309 +0000 UTC m=+1650.549262036" watchObservedRunningTime="2025-11-21 13:59:33.846732366 +0000 UTC m=+1650.573147103" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.853595 4675 scope.go:117] "RemoveContainer" 
containerID="8472e0b81aaeca11e557f8ee1319039a7c7651ab01f69f50f81e85bfac90e43f" Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.918102 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 13:59:33 crc kubenswrapper[4675]: I1121 13:59:33.940507 4675 scope.go:117] "RemoveContainer" containerID="60cad6006b01a90cf51d23a40f106d8666a0d3eaddd388aff83fb51c8c7fd079" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.177306 4675 scope.go:117] "RemoveContainer" containerID="a9f76b2154c6ee134725d91b1f6d072e62ab28816574c37c271fe38071471768" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.437841 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.592525 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.831407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f0-account-create-9jvch" event={"ID":"91ece073-0d12-40a6-a6c6-8f40cbc5268f","Type":"ContainerStarted","Data":"7ec7edb9dbfb5d3fb0593e4de1ec5c2cd5ec76c146fe17fe64c13abe40a3f38f"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.833818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lxtx8" event={"ID":"da354635-a1e7-4632-90d1-7d0cc2dded63","Type":"ContainerStarted","Data":"2da31482327ca790c9c56081aac902bdef806cce97741ba884ecf928fce6e019"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.834959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vslv" event={"ID":"028aa8ea-8e30-4ec4-8280-59935c9cf343","Type":"ContainerStarted","Data":"dfa236b5688ff36bc64172535794600694e27dec11a231e8e1d8b5b889e3fa5e"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.836684 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e08a8ae1-1033-4b31-89df-b85614075cbf","Type":"ContainerStarted","Data":"c0cdeb6dc96893c48b70bd0ba7c8a139f9ecea90b6468e7cf3c62ea2bdc8fc3d"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.847834 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" event={"ID":"35b58484-6cb2-4edc-bea9-4d3a8d6b1479","Type":"ContainerStarted","Data":"f017eee70a9e34e47683d4525b074ca85edaf8ae0487992c157db0dbb2f1735b"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.851227 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f7f0-account-create-9jvch" podStartSLOduration=5.851210674 podStartE2EDuration="5.851210674s" podCreationTimestamp="2025-11-21 13:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:34.845432809 +0000 UTC m=+1651.571847536" watchObservedRunningTime="2025-11-21 13:59:34.851210674 +0000 UTC m=+1651.577625401" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.863249 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-lxtx8" podStartSLOduration=5.863229974 podStartE2EDuration="5.863229974s" podCreationTimestamp="2025-11-21 13:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:34.860298191 +0000 UTC 
m=+1651.586712918" watchObservedRunningTime="2025-11-21 13:59:34.863229974 +0000 UTC m=+1651.589644701" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.876268 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e46-account-create-956rv" event={"ID":"21f160c6-8942-4e0a-bf07-6c57e7d69175","Type":"ContainerStarted","Data":"c3bd7e7ca3cadae95e7db90ab5002fd269339adac850380576e51c757eb86d69"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.892165 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.892215 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" event={"ID":"f8bf22f5-333f-43c5-9666-86ffa5657944","Type":"ContainerStarted","Data":"14473afe41f334940c7118664396a74767fb83e70465bd7322ded67e030f0e98"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.892293 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.892304 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f94f9658f-ptznm" event={"ID":"f1015b8a-a8a3-4941-8959-2d4fd5aee749","Type":"ContainerStarted","Data":"1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.893220 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.893766 4675 generic.go:334] "Generic (PLEG): container finished" podID="68ae1285-2384-4da2-803f-9395625e88de" containerID="61a97fb024614fff96db2c59d2e2bcfd35aea206500ace4ce0294a52edf17b02" exitCode=0 Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.895259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wnqhq" event={"ID":"68ae1285-2384-4da2-803f-9395625e88de","Type":"ContainerDied","Data":"61a97fb024614fff96db2c59d2e2bcfd35aea206500ace4ce0294a52edf17b02"} Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.912821 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-8vslv" podStartSLOduration=5.912803453 podStartE2EDuration="5.912803453s" podCreationTimestamp="2025-11-21 13:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:34.872966077 +0000 UTC m=+1651.599380804" watchObservedRunningTime="2025-11-21 13:59:34.912803453 +0000 UTC m=+1651.639218180" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.970566 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" podStartSLOduration=12.970545756 podStartE2EDuration="12.970545756s" podCreationTimestamp="2025-11-21 13:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:34.962174966 +0000 UTC m=+1651.688589693" watchObservedRunningTime="2025-11-21 13:59:34.970545756 +0000 UTC m=+1651.696960483" Nov 21 13:59:34 crc kubenswrapper[4675]: I1121 13:59:34.991159 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-8e46-account-create-956rv" podStartSLOduration=4.99113995 podStartE2EDuration="4.99113995s" 
podCreationTimestamp="2025-11-21 13:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:34.990439603 +0000 UTC m=+1651.716854330" watchObservedRunningTime="2025-11-21 13:59:34.99113995 +0000 UTC m=+1651.717554677" Nov 21 13:59:35 crc kubenswrapper[4675]: E1121 13:59:35.043521 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:35 crc kubenswrapper[4675]: I1121 13:59:35.110130 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-f94f9658f-ptznm" podStartSLOduration=11.110107753 podStartE2EDuration="11.110107753s" podCreationTimestamp="2025-11-21 13:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:35.091752694 +0000 UTC m=+1651.818167421" watchObservedRunningTime="2025-11-21 13:59:35.110107753 +0000 UTC m=+1651.836522480" Nov 21 13:59:35 crc kubenswrapper[4675]: I1121 13:59:35.129539 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" podStartSLOduration=5.129519468 podStartE2EDuration="5.129519468s" podCreationTimestamp="2025-11-21 13:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:35.105160059 +0000 UTC m=+1651.831574786" watchObservedRunningTime="2025-11-21 13:59:35.129519468 +0000 UTC m=+1651.855934195" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.344420 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wnqhq" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.459010 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-588hb\" (UniqueName: \"kubernetes.io/projected/68ae1285-2384-4da2-803f-9395625e88de-kube-api-access-588hb\") pod \"68ae1285-2384-4da2-803f-9395625e88de\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.459399 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ae1285-2384-4da2-803f-9395625e88de-operator-scripts\") pod \"68ae1285-2384-4da2-803f-9395625e88de\" (UID: \"68ae1285-2384-4da2-803f-9395625e88de\") " Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.460209 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ae1285-2384-4da2-803f-9395625e88de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68ae1285-2384-4da2-803f-9395625e88de" (UID: "68ae1285-2384-4da2-803f-9395625e88de"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.464593 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ae1285-2384-4da2-803f-9395625e88de-kube-api-access-588hb" (OuterVolumeSpecName: "kube-api-access-588hb") pod "68ae1285-2384-4da2-803f-9395625e88de" (UID: "68ae1285-2384-4da2-803f-9395625e88de"). InnerVolumeSpecName "kube-api-access-588hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.561972 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-588hb\" (UniqueName: \"kubernetes.io/projected/68ae1285-2384-4da2-803f-9395625e88de-kube-api-access-588hb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.562019 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ae1285-2384-4da2-803f-9395625e88de-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.929872 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wnqhq" Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.929892 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wnqhq" event={"ID":"68ae1285-2384-4da2-803f-9395625e88de","Type":"ContainerDied","Data":"296fa9acb2423862a441d43313424516e4e18be0d3c9319f15307887d2253035"} Nov 21 13:59:36 crc kubenswrapper[4675]: I1121 13:59:36.931031 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="296fa9acb2423862a441d43313424516e4e18be0d3c9319f15307887d2253035" Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.951352 4675 generic.go:334] "Generic (PLEG): container finished" podID="91ece073-0d12-40a6-a6c6-8f40cbc5268f" containerID="7ec7edb9dbfb5d3fb0593e4de1ec5c2cd5ec76c146fe17fe64c13abe40a3f38f" exitCode=0 Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.951447 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f0-account-create-9jvch" event={"ID":"91ece073-0d12-40a6-a6c6-8f40cbc5268f","Type":"ContainerDied","Data":"7ec7edb9dbfb5d3fb0593e4de1ec5c2cd5ec76c146fe17fe64c13abe40a3f38f"} Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.957439 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerID="f7821cd6d326a53f10d48975a36b2eb5f10a6257ba7325bef059de9fdf1ad47d" exitCode=1 Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.957583 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" event={"ID":"6e39ea81-bdee-475a-87ea-5fbd7c02759f","Type":"ContainerDied","Data":"f7821cd6d326a53f10d48975a36b2eb5f10a6257ba7325bef059de9fdf1ad47d"} Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.957640 4675 scope.go:117] "RemoveContainer" containerID="b58a03cfcc1e87983a58d06a74f780850673403bf46a0be305c56e36155ec694" Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.958465 4675 scope.go:117] "RemoveContainer" containerID="f7821cd6d326a53f10d48975a36b2eb5f10a6257ba7325bef059de9fdf1ad47d" Nov 21 13:59:37 crc kubenswrapper[4675]: E1121 13:59:37.958850 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi 
pod=heat-cfnapi-5856d7f7bf-rsjh7_openstack(6e39ea81-bdee-475a-87ea-5fbd7c02759f)\"" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.962303 4675 generic.go:334] "Generic (PLEG): container finished" podID="21f160c6-8942-4e0a-bf07-6c57e7d69175" containerID="c3bd7e7ca3cadae95e7db90ab5002fd269339adac850380576e51c757eb86d69" exitCode=0 Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.962371 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e46-account-create-956rv" event={"ID":"21f160c6-8942-4e0a-bf07-6c57e7d69175","Type":"ContainerDied","Data":"c3bd7e7ca3cadae95e7db90ab5002fd269339adac850380576e51c757eb86d69"} Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.976373 4675 generic.go:334] "Generic (PLEG): container finished" podID="f8bf22f5-333f-43c5-9666-86ffa5657944" containerID="14473afe41f334940c7118664396a74767fb83e70465bd7322ded67e030f0e98" exitCode=0 Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.976434 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" event={"ID":"f8bf22f5-333f-43c5-9666-86ffa5657944","Type":"ContainerDied","Data":"14473afe41f334940c7118664396a74767fb83e70465bd7322ded67e030f0e98"} Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.982403 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64cafc2c-04de-4090-9026-2b986fcae86a","Type":"ContainerStarted","Data":"ca7c84f3000b67fa97da20232cc37bb57b6d077e4dd54eb87948f55b8e39f720"} Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.985383 4675 generic.go:334] "Generic (PLEG): container finished" podID="da354635-a1e7-4632-90d1-7d0cc2dded63" containerID="2da31482327ca790c9c56081aac902bdef806cce97741ba884ecf928fce6e019" exitCode=0 Nov 21 13:59:37 crc kubenswrapper[4675]: I1121 13:59:37.985469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lxtx8" event={"ID":"da354635-a1e7-4632-90d1-7d0cc2dded63","Type":"ContainerDied","Data":"2da31482327ca790c9c56081aac902bdef806cce97741ba884ecf928fce6e019"} Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.015937 4675 generic.go:334] "Generic (PLEG): container finished" podID="028aa8ea-8e30-4ec4-8280-59935c9cf343" containerID="dfa236b5688ff36bc64172535794600694e27dec11a231e8e1d8b5b889e3fa5e" exitCode=0 Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.016160 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vslv" event={"ID":"028aa8ea-8e30-4ec4-8280-59935c9cf343","Type":"ContainerDied","Data":"dfa236b5688ff36bc64172535794600694e27dec11a231e8e1d8b5b889e3fa5e"} Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.022111 4675 generic.go:334] "Generic (PLEG): container finished" podID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerID="37d604d0466361c4ddd643514f3cc00ce019cc0430961f1a085ae3e341a11378" exitCode=1 Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.022188 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864b497b7c-2zlp2" event={"ID":"979b6714-8706-4ab6-bd6e-dd127eed8347","Type":"ContainerDied","Data":"37d604d0466361c4ddd643514f3cc00ce019cc0430961f1a085ae3e341a11378"} Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.022995 4675 scope.go:117] "RemoveContainer" containerID="37d604d0466361c4ddd643514f3cc00ce019cc0430961f1a085ae3e341a11378" Nov 21 13:59:38 crc kubenswrapper[4675]: E1121 
13:59:38.023461 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-864b497b7c-2zlp2_openstack(979b6714-8706-4ab6-bd6e-dd127eed8347)\"" pod="openstack/heat-api-864b497b7c-2zlp2" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.035560 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e08a8ae1-1033-4b31-89df-b85614075cbf","Type":"ContainerStarted","Data":"9d48aacbdf759c23169d3157bb177d1ab24af7e7fd69613b8629841bc3c6d70b"} Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.045322 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc","Type":"ContainerStarted","Data":"bb83f9bb40b4141dcd19fbeda8f12d4237a79b0aac5f05aee7d6f83c76112308"} Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.113547 4675 scope.go:117] "RemoveContainer" containerID="b6983cd6a242532a73e9887a4035526f21fecbacc0c2d498694ea70c0649bd44" Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.322868 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.323340 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.346629 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:38 crc kubenswrapper[4675]: I1121 13:59:38.346675 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.064558 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e08a8ae1-1033-4b31-89df-b85614075cbf","Type":"ContainerStarted","Data":"33ba0c01caf427d25f24b8f91c980c44e28b2c975f4fdbe81a3ab3cbc11b2dd2"} Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.074009 4675 scope.go:117] "RemoveContainer" containerID="f7821cd6d326a53f10d48975a36b2eb5f10a6257ba7325bef059de9fdf1ad47d" Nov 21 13:59:39 crc kubenswrapper[4675]: E1121 13:59:39.074780 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5856d7f7bf-rsjh7_openstack(6e39ea81-bdee-475a-87ea-5fbd7c02759f)\"" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.077196 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerStarted","Data":"9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252"} Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.079678 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64cafc2c-04de-4090-9026-2b986fcae86a","Type":"ContainerStarted","Data":"81f9095315fb10fdf36bd083015df7e9b89b575ad7a434e34999d3e837b76f1a"} Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.079821 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cinder-api-0" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.083274 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc","Type":"ContainerStarted","Data":"639baf2e906b1c6a2e4ead03bcf94b3918d1198e503cc8a4452fa6a65e4e8d12"} Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.085825 4675 scope.go:117] "RemoveContainer" containerID="37d604d0466361c4ddd643514f3cc00ce019cc0430961f1a085ae3e341a11378" Nov 21 13:59:39 crc kubenswrapper[4675]: E1121 13:59:39.086203 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-864b497b7c-2zlp2_openstack(979b6714-8706-4ab6-bd6e-dd127eed8347)\"" pod="openstack/heat-api-864b497b7c-2zlp2" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.113154 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.113133394 podStartE2EDuration="7.113133394s" podCreationTimestamp="2025-11-21 13:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:39.09854971 +0000 UTC m=+1655.824964457" watchObservedRunningTime="2025-11-21 13:59:39.113133394 +0000 UTC m=+1655.839548111" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.146993 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.14697471 podStartE2EDuration="9.14697471s" podCreationTimestamp="2025-11-21 13:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:39.13817071 +0000 UTC m=+1655.864585437" watchObservedRunningTime="2025-11-21 13:59:39.14697471 +0000 UTC m=+1655.873389437" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.166019 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.166000775 podStartE2EDuration="9.166000775s" podCreationTimestamp="2025-11-21 13:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:39.16378327 +0000 UTC m=+1655.890197997" watchObservedRunningTime="2025-11-21 13:59:39.166000775 +0000 UTC m=+1655.892415502" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.647325 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-69bbd8cb64-j4z85" podUID="6f3372bb-4733-4de4-b579-e9ede0ce2ed4" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.206:8000/healthcheck\": dial tcp 10.217.0.206:8000: i/o timeout (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.694150 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7c9cd5548d-qsvvl" podUID="55a4bbbf-724c-4ec4-95ce-0bc8395012f7" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.207:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.815739 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.854367 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 13:59:39 crc kubenswrapper[4675]: E1121 13:59:39.854978 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.969478 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8bf22f5-333f-43c5-9666-86ffa5657944-operator-scripts\") pod \"f8bf22f5-333f-43c5-9666-86ffa5657944\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.969792 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4lbz\" (UniqueName: \"kubernetes.io/projected/f8bf22f5-333f-43c5-9666-86ffa5657944-kube-api-access-l4lbz\") pod \"f8bf22f5-333f-43c5-9666-86ffa5657944\" (UID: \"f8bf22f5-333f-43c5-9666-86ffa5657944\") " Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.972272 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8bf22f5-333f-43c5-9666-86ffa5657944-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8bf22f5-333f-43c5-9666-86ffa5657944" (UID: "f8bf22f5-333f-43c5-9666-86ffa5657944"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:39 crc kubenswrapper[4675]: I1121 13:59:39.980602 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8bf22f5-333f-43c5-9666-86ffa5657944-kube-api-access-l4lbz" (OuterVolumeSpecName: "kube-api-access-l4lbz") pod "f8bf22f5-333f-43c5-9666-86ffa5657944" (UID: "f8bf22f5-333f-43c5-9666-86ffa5657944"). InnerVolumeSpecName "kube-api-access-l4lbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.031367 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8e46-account-create-956rv" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.054721 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7f0-account-create-9jvch" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.076058 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lxtx8" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.077298 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8bf22f5-333f-43c5-9666-86ffa5657944-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.077331 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4lbz\" (UniqueName: \"kubernetes.io/projected/f8bf22f5-333f-43c5-9666-86ffa5657944-kube-api-access-l4lbz\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.093296 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8vslv" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.118313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7f0-account-create-9jvch" event={"ID":"91ece073-0d12-40a6-a6c6-8f40cbc5268f","Type":"ContainerDied","Data":"7ba8818b4b02f4005df48c836ed93c33f875a4ab50d0379f4cce8565227a3b4d"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.118349 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba8818b4b02f4005df48c836ed93c33f875a4ab50d0379f4cce8565227a3b4d" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.118328 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7f0-account-create-9jvch" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.121873 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lxtx8" event={"ID":"da354635-a1e7-4632-90d1-7d0cc2dded63","Type":"ContainerDied","Data":"3aef7dfefe9ee5ce585777d15a0b91e5d8ae2f3a2489b17eb34853c787489f26"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.121913 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aef7dfefe9ee5ce585777d15a0b91e5d8ae2f3a2489b17eb34853c787489f26" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.121954 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lxtx8" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.126630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8vslv" event={"ID":"028aa8ea-8e30-4ec4-8280-59935c9cf343","Type":"ContainerDied","Data":"d3a0ee3d3479ee56e9a4044a6bab784c18e1a1550dc197212991575f1f270a26"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.126690 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3a0ee3d3479ee56e9a4044a6bab784c18e1a1550dc197212991575f1f270a26" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.126747 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8vslv" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.130711 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e46-account-create-956rv" event={"ID":"21f160c6-8942-4e0a-bf07-6c57e7d69175","Type":"ContainerDied","Data":"c8420c8d452b64908c1a9c4ba4074206abbd7f0e67f552cf76644c47b44448c3"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.130752 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8420c8d452b64908c1a9c4ba4074206abbd7f0e67f552cf76644c47b44448c3" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.130903 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8e46-account-create-956rv" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.135358 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" event={"ID":"f8bf22f5-333f-43c5-9666-86ffa5657944","Type":"ContainerDied","Data":"c10ce763a81306cc00f7aa03c08497fcf56897f10eb55cbde7facb7e84c9bb0c"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.135561 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10ce763a81306cc00f7aa03c08497fcf56897f10eb55cbde7facb7e84c9bb0c" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.135571 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a3b5-account-create-d6mhd" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.142414 4675 scope.go:117] "RemoveContainer" containerID="f7821cd6d326a53f10d48975a36b2eb5f10a6257ba7325bef059de9fdf1ad47d" Nov 21 13:59:40 crc kubenswrapper[4675]: E1121 13:59:40.142799 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5856d7f7bf-rsjh7_openstack(6e39ea81-bdee-475a-87ea-5fbd7c02759f)\"" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.143860 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerStarted","Data":"e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.143901 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerStarted","Data":"5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c"} Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.144250 4675 scope.go:117] "RemoveContainer" containerID="37d604d0466361c4ddd643514f3cc00ce019cc0430961f1a085ae3e341a11378" Nov 21 13:59:40 crc kubenswrapper[4675]: E1121 13:59:40.144447 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-864b497b7c-2zlp2_openstack(979b6714-8706-4ab6-bd6e-dd127eed8347)\"" pod="openstack/heat-api-864b497b7c-2zlp2" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.177867 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6vk\" (UniqueName: 
\"kubernetes.io/projected/21f160c6-8942-4e0a-bf07-6c57e7d69175-kube-api-access-kw6vk\") pod \"21f160c6-8942-4e0a-bf07-6c57e7d69175\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.178320 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ht78\" (UniqueName: \"kubernetes.io/projected/028aa8ea-8e30-4ec4-8280-59935c9cf343-kube-api-access-8ht78\") pod \"028aa8ea-8e30-4ec4-8280-59935c9cf343\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.178439 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtkk8\" (UniqueName: \"kubernetes.io/projected/da354635-a1e7-4632-90d1-7d0cc2dded63-kube-api-access-xtkk8\") pod \"da354635-a1e7-4632-90d1-7d0cc2dded63\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.178652 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ece073-0d12-40a6-a6c6-8f40cbc5268f-operator-scripts\") pod \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.178799 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f160c6-8942-4e0a-bf07-6c57e7d69175-operator-scripts\") pod \"21f160c6-8942-4e0a-bf07-6c57e7d69175\" (UID: \"21f160c6-8942-4e0a-bf07-6c57e7d69175\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.178942 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/028aa8ea-8e30-4ec4-8280-59935c9cf343-operator-scripts\") pod \"028aa8ea-8e30-4ec4-8280-59935c9cf343\" (UID: \"028aa8ea-8e30-4ec4-8280-59935c9cf343\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.179203 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ece073-0d12-40a6-a6c6-8f40cbc5268f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91ece073-0d12-40a6-a6c6-8f40cbc5268f" (UID: "91ece073-0d12-40a6-a6c6-8f40cbc5268f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.179305 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f160c6-8942-4e0a-bf07-6c57e7d69175-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21f160c6-8942-4e0a-bf07-6c57e7d69175" (UID: "21f160c6-8942-4e0a-bf07-6c57e7d69175"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.179396 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028aa8ea-8e30-4ec4-8280-59935c9cf343-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "028aa8ea-8e30-4ec4-8280-59935c9cf343" (UID: "028aa8ea-8e30-4ec4-8280-59935c9cf343"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.179757 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhkvt\" (UniqueName: \"kubernetes.io/projected/91ece073-0d12-40a6-a6c6-8f40cbc5268f-kube-api-access-lhkvt\") pod \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\" (UID: \"91ece073-0d12-40a6-a6c6-8f40cbc5268f\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.179875 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da354635-a1e7-4632-90d1-7d0cc2dded63-operator-scripts\") pod \"da354635-a1e7-4632-90d1-7d0cc2dded63\" (UID: \"da354635-a1e7-4632-90d1-7d0cc2dded63\") " Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.180678 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da354635-a1e7-4632-90d1-7d0cc2dded63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da354635-a1e7-4632-90d1-7d0cc2dded63" (UID: "da354635-a1e7-4632-90d1-7d0cc2dded63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.181530 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91ece073-0d12-40a6-a6c6-8f40cbc5268f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.181640 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f160c6-8942-4e0a-bf07-6c57e7d69175-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.181726 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/028aa8ea-8e30-4ec4-8280-59935c9cf343-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.183303 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da354635-a1e7-4632-90d1-7d0cc2dded63-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.183668 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f160c6-8942-4e0a-bf07-6c57e7d69175-kube-api-access-kw6vk" (OuterVolumeSpecName: "kube-api-access-kw6vk") pod "21f160c6-8942-4e0a-bf07-6c57e7d69175" (UID: "21f160c6-8942-4e0a-bf07-6c57e7d69175"). InnerVolumeSpecName "kube-api-access-kw6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.186348 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ece073-0d12-40a6-a6c6-8f40cbc5268f-kube-api-access-lhkvt" (OuterVolumeSpecName: "kube-api-access-lhkvt") pod "91ece073-0d12-40a6-a6c6-8f40cbc5268f" (UID: "91ece073-0d12-40a6-a6c6-8f40cbc5268f"). InnerVolumeSpecName "kube-api-access-lhkvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.186412 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da354635-a1e7-4632-90d1-7d0cc2dded63-kube-api-access-xtkk8" (OuterVolumeSpecName: "kube-api-access-xtkk8") pod "da354635-a1e7-4632-90d1-7d0cc2dded63" (UID: "da354635-a1e7-4632-90d1-7d0cc2dded63"). InnerVolumeSpecName "kube-api-access-xtkk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.186438 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028aa8ea-8e30-4ec4-8280-59935c9cf343-kube-api-access-8ht78" (OuterVolumeSpecName: "kube-api-access-8ht78") pod "028aa8ea-8e30-4ec4-8280-59935c9cf343" (UID: "028aa8ea-8e30-4ec4-8280-59935c9cf343"). InnerVolumeSpecName "kube-api-access-8ht78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.287682 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhkvt\" (UniqueName: \"kubernetes.io/projected/91ece073-0d12-40a6-a6c6-8f40cbc5268f-kube-api-access-lhkvt\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.287953 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw6vk\" (UniqueName: \"kubernetes.io/projected/21f160c6-8942-4e0a-bf07-6c57e7d69175-kube-api-access-kw6vk\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.287967 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ht78\" (UniqueName: \"kubernetes.io/projected/028aa8ea-8e30-4ec4-8280-59935c9cf343-kube-api-access-8ht78\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4675]: I1121 13:59:40.287982 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtkk8\" (UniqueName: \"kubernetes.io/projected/da354635-a1e7-4632-90d1-7d0cc2dded63-kube-api-access-xtkk8\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.010146 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.010513 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.049190 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.074545 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171033 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-central-agent" containerID="cri-o://9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252" gracePeriod=30 Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171263 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerStarted","Data":"6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea"} Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 
13:59:42.171345 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171435 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="proxy-httpd" containerID="cri-o://6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea" gracePeriod=30 Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171517 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="sg-core" containerID="cri-o://e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd" gracePeriod=30 Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171564 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-notification-agent" containerID="cri-o://5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c" gracePeriod=30 Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171630 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.171836 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.201992 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.083486672 podStartE2EDuration="11.201972303s" podCreationTimestamp="2025-11-21 13:59:31 +0000 UTC" firstStartedPulling="2025-11-21 13:59:33.576640247 +0000 UTC m=+1650.303054974" lastFinishedPulling="2025-11-21 13:59:41.695125878 +0000 UTC m=+1658.421540605" observedRunningTime="2025-11-21 13:59:42.196832645 +0000 UTC m=+1658.923247402" watchObservedRunningTime="2025-11-21 13:59:42.201972303 +0000 UTC m=+1658.928387030" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.917143 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:42 crc kubenswrapper[4675]: I1121 13:59:42.918196 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.088768 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.089086 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.153399 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.200459 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.228406 4675 generic.go:334] "Generic (PLEG): container finished" podID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerID="e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd" exitCode=2 Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.228436 4675 
generic.go:334] "Generic (PLEG): container finished" podID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerID="5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c" exitCode=0 Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.229953 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerDied","Data":"e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd"} Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.229983 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerDied","Data":"5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c"} Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.230440 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.230457 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.327699 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.494595 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-79c7749954-ksq5g"] Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.494892 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-79c7749954-ksq5g" podUID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" containerName="heat-engine" containerID="cri-o://9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466" gracePeriod=60 Nov 21 13:59:43 crc kubenswrapper[4675]: E1121 13:59:43.522154 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aab9c0_e5af_48f5_895a_1e560b3ddb35.slice/crio-conmon-9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aab9c0_e5af_48f5_895a_1e560b3ddb35.slice/crio-9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.695675 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.705645 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 13:59:43 crc kubenswrapper[4675]: I1121 13:59:43.765182 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-864b497b7c-2zlp2"] Nov 21 13:59:43 crc kubenswrapper[4675]: 
I1121 13:59:43.855404 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5856d7f7bf-rsjh7"] Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.255381 4675 generic.go:334] "Generic (PLEG): container finished" podID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerID="9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252" exitCode=0 Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.256683 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerDied","Data":"9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252"} Nov 21 13:59:44 crc kubenswrapper[4675]: E1121 13:59:44.351236 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 13:59:44 crc kubenswrapper[4675]: E1121 13:59:44.359298 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 13:59:44 crc kubenswrapper[4675]: E1121 13:59:44.374043 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 13:59:44 crc kubenswrapper[4675]: E1121 13:59:44.374107 4675 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-79c7749954-ksq5g" podUID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" containerName="heat-engine" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.530140 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.537869 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721182 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data\") pod \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721553 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr6cs\" (UniqueName: \"kubernetes.io/projected/6e39ea81-bdee-475a-87ea-5fbd7c02759f-kube-api-access-cr6cs\") pod \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721621 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-combined-ca-bundle\") pod \"979b6714-8706-4ab6-bd6e-dd127eed8347\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721760 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data-custom\") pod \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24wpl\" (UniqueName: \"kubernetes.io/projected/979b6714-8706-4ab6-bd6e-dd127eed8347-kube-api-access-24wpl\") pod \"979b6714-8706-4ab6-bd6e-dd127eed8347\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721874 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-combined-ca-bundle\") pod \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\" (UID: \"6e39ea81-bdee-475a-87ea-5fbd7c02759f\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721904 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data\") pod \"979b6714-8706-4ab6-bd6e-dd127eed8347\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.721953 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data-custom\") pod \"979b6714-8706-4ab6-bd6e-dd127eed8347\" (UID: \"979b6714-8706-4ab6-bd6e-dd127eed8347\") " Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.733965 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e39ea81-bdee-475a-87ea-5fbd7c02759f-kube-api-access-cr6cs" (OuterVolumeSpecName: "kube-api-access-cr6cs") pod "6e39ea81-bdee-475a-87ea-5fbd7c02759f" (UID: "6e39ea81-bdee-475a-87ea-5fbd7c02759f"). InnerVolumeSpecName "kube-api-access-cr6cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.742934 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979b6714-8706-4ab6-bd6e-dd127eed8347-kube-api-access-24wpl" (OuterVolumeSpecName: "kube-api-access-24wpl") pod "979b6714-8706-4ab6-bd6e-dd127eed8347" (UID: "979b6714-8706-4ab6-bd6e-dd127eed8347"). InnerVolumeSpecName "kube-api-access-24wpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.743145 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e39ea81-bdee-475a-87ea-5fbd7c02759f" (UID: "6e39ea81-bdee-475a-87ea-5fbd7c02759f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.745405 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "979b6714-8706-4ab6-bd6e-dd127eed8347" (UID: "979b6714-8706-4ab6-bd6e-dd127eed8347"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.778199 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e39ea81-bdee-475a-87ea-5fbd7c02759f" (UID: "6e39ea81-bdee-475a-87ea-5fbd7c02759f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.824512 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "979b6714-8706-4ab6-bd6e-dd127eed8347" (UID: "979b6714-8706-4ab6-bd6e-dd127eed8347"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.826413 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr6cs\" (UniqueName: \"kubernetes.io/projected/6e39ea81-bdee-475a-87ea-5fbd7c02759f-kube-api-access-cr6cs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.827952 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.827988 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.828002 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24wpl\" (UniqueName: \"kubernetes.io/projected/979b6714-8706-4ab6-bd6e-dd127eed8347-kube-api-access-24wpl\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.828020 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.828032 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.828455 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data" (OuterVolumeSpecName: "config-data") pod "979b6714-8706-4ab6-bd6e-dd127eed8347" (UID: "979b6714-8706-4ab6-bd6e-dd127eed8347"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.835204 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data" (OuterVolumeSpecName: "config-data") pod "6e39ea81-bdee-475a-87ea-5fbd7c02759f" (UID: "6e39ea81-bdee-475a-87ea-5fbd7c02759f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.931109 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979b6714-8706-4ab6-bd6e-dd127eed8347-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4675]: I1121 13:59:44.931151 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e39ea81-bdee-475a-87ea-5fbd7c02759f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.097807 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162137 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rxmhb"] Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162656 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerName="heat-api" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162674 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerName="heat-api" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162685 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ece073-0d12-40a6-a6c6-8f40cbc5268f" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162692 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ece073-0d12-40a6-a6c6-8f40cbc5268f" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162713 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da354635-a1e7-4632-90d1-7d0cc2dded63" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162720 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="da354635-a1e7-4632-90d1-7d0cc2dded63" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162733 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerName="heat-cfnapi" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162740 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerName="heat-cfnapi" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162754 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerName="heat-cfnapi" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162759 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerName="heat-cfnapi" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162775 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bf22f5-333f-43c5-9666-86ffa5657944" 
containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162781 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8bf22f5-333f-43c5-9666-86ffa5657944" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162806 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ae1285-2384-4da2-803f-9395625e88de" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162813 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ae1285-2384-4da2-803f-9395625e88de" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162824 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f160c6-8942-4e0a-bf07-6c57e7d69175" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162830 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f160c6-8942-4e0a-bf07-6c57e7d69175" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: E1121 13:59:45.162841 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028aa8ea-8e30-4ec4-8280-59935c9cf343" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.162847 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="028aa8ea-8e30-4ec4-8280-59935c9cf343" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163040 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerName="heat-api" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163051 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f160c6-8942-4e0a-bf07-6c57e7d69175" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163060 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ae1285-2384-4da2-803f-9395625e88de" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163084 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerName="heat-api" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163091 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="da354635-a1e7-4632-90d1-7d0cc2dded63" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163104 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8bf22f5-333f-43c5-9666-86ffa5657944" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163121 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerName="heat-cfnapi" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163128 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="028aa8ea-8e30-4ec4-8280-59935c9cf343" containerName="mariadb-database-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163141 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ece073-0d12-40a6-a6c6-8f40cbc5268f" containerName="mariadb-account-create" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.163918 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.170514 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.170582 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.170835 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xh9wr" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.185611 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rxmhb"] Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.236774 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.237127 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-config-data\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.237193 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6gj\" (UniqueName: \"kubernetes.io/projected/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-kube-api-access-nj6gj\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.237227 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-scripts\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.266627 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864b497b7c-2zlp2" event={"ID":"979b6714-8706-4ab6-bd6e-dd127eed8347","Type":"ContainerDied","Data":"4a6590bf441faa059f7c84af3f57775400dcbd795fff706539a67948ab9c0ac5"} Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.266675 4675 scope.go:117] "RemoveContainer" containerID="37d604d0466361c4ddd643514f3cc00ce019cc0430961f1a085ae3e341a11378" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.266760 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-864b497b7c-2zlp2" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.270053 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.270089 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.270890 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.271337 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5856d7f7bf-rsjh7" event={"ID":"6e39ea81-bdee-475a-87ea-5fbd7c02759f","Type":"ContainerDied","Data":"d8ede6431d28a684f7db11545895465314723a23c497b347c46c127fe36c43c5"} Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.298250 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5856d7f7bf-rsjh7"] Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.309363 4675 scope.go:117] "RemoveContainer" containerID="f7821cd6d326a53f10d48975a36b2eb5f10a6257ba7325bef059de9fdf1ad47d" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.319595 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5856d7f7bf-rsjh7"] Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.337158 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-864b497b7c-2zlp2"] Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.344978 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-config-data\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.345204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6gj\" (UniqueName: \"kubernetes.io/projected/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-kube-api-access-nj6gj\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.345295 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-scripts\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.345827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.349963 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-scripts\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.350199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-config-data\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.350552 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.370987 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-864b497b7c-2zlp2"] Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.373605 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6gj\" (UniqueName: \"kubernetes.io/projected/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-kube-api-access-nj6gj\") pod \"nova-cell0-conductor-db-sync-rxmhb\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:45 crc kubenswrapper[4675]: I1121 13:59:45.481529 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 13:59:46 crc kubenswrapper[4675]: I1121 13:59:46.404822 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rxmhb"] Nov 21 13:59:46 crc kubenswrapper[4675]: W1121 13:59:46.411234 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc28aa6ba_e1e4_45ef_8c5e_33a3263103ff.slice/crio-827ad82647b370a169fe7b49df28c44c80bcf8973c18d9d73f3f17513d171905 WatchSource:0}: Error finding container 827ad82647b370a169fe7b49df28c44c80bcf8973c18d9d73f3f17513d171905: Status 404 returned error can't find the container with id 827ad82647b370a169fe7b49df28c44c80bcf8973c18d9d73f3f17513d171905 Nov 21 13:59:46 crc kubenswrapper[4675]: I1121 13:59:46.869499 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" path="/var/lib/kubelet/pods/6e39ea81-bdee-475a-87ea-5fbd7c02759f/volumes" Nov 21 13:59:46 crc kubenswrapper[4675]: I1121 13:59:46.871554 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" path="/var/lib/kubelet/pods/979b6714-8706-4ab6-bd6e-dd127eed8347/volumes" Nov 21 13:59:47 crc kubenswrapper[4675]: I1121 13:59:47.352207 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" event={"ID":"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff","Type":"ContainerStarted","Data":"827ad82647b370a169fe7b49df28c44c80bcf8973c18d9d73f3f17513d171905"} Nov 21 13:59:47 crc kubenswrapper[4675]: I1121 13:59:47.691522 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:47 crc kubenswrapper[4675]: I1121 13:59:47.691643 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:59:47 crc kubenswrapper[4675]: I1121 13:59:47.695947 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 13:59:47 crc kubenswrapper[4675]: I1121 13:59:47.696358 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 13:59:47 crc kubenswrapper[4675]: I1121 13:59:47.740185 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 13:59:48 crc kubenswrapper[4675]: E1121 13:59:48.154463 4675 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:48 crc kubenswrapper[4675]: E1121 13:59:48.155346 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.272732 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.408028 4675 generic.go:334] "Generic (PLEG): container finished" podID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" containerID="9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466" exitCode=0 Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.408840 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79c7749954-ksq5g" event={"ID":"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2","Type":"ContainerDied","Data":"9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466"} Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.616627 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.820305 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data-custom\") pod \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.820491 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-combined-ca-bundle\") pod \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.820578 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data\") pod \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.820617 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7sqs\" (UniqueName: \"kubernetes.io/projected/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-kube-api-access-h7sqs\") pod \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\" (UID: \"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2\") " Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.844527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" (UID: "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.860870 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-kube-api-access-h7sqs" (OuterVolumeSpecName: "kube-api-access-h7sqs") pod "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" (UID: "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2"). InnerVolumeSpecName "kube-api-access-h7sqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.885227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" (UID: "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.925625 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.925655 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.925667 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7sqs\" (UniqueName: \"kubernetes.io/projected/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-kube-api-access-h7sqs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:49 crc kubenswrapper[4675]: I1121 13:59:49.994242 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data" (OuterVolumeSpecName: "config-data") pod "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" (UID: "3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.030985 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.426684 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79c7749954-ksq5g" event={"ID":"3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2","Type":"ContainerDied","Data":"d19c47947a733e918420c099d8765ac71233f205c6bccf1f3200f9a1ee82c396"} Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.426737 4675 scope.go:117] "RemoveContainer" containerID="9ffebf05519607587377af6db5a1c610d349afb105e58b568650af6415061466" Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.426771 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79c7749954-ksq5g" Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.468123 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-79c7749954-ksq5g"] Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.480738 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-79c7749954-ksq5g"] Nov 21 13:59:50 crc kubenswrapper[4675]: I1121 13:59:50.867959 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" path="/var/lib/kubelet/pods/3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2/volumes" Nov 21 13:59:52 crc kubenswrapper[4675]: I1121 13:59:52.849461 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 13:59:52 crc kubenswrapper[4675]: E1121 13:59:52.850106 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 13:59:55 crc kubenswrapper[4675]: E1121 13:59:55.436715 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:55 crc kubenswrapper[4675]: I1121 13:59:55.927323 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="64cafc2c-04de-4090-9026-2b986fcae86a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.220:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:58 crc kubenswrapper[4675]: E1121 13:59:58.065900 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78391e3d_3bab_469a_a163_3729fdf23773.slice/crio-conmon-09c34121b2e338bdc8acd5d1dbc58e7faae8ab897fd9fced384c6828770d07b7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:59:59 crc kubenswrapper[4675]: I1121 13:59:59.532643 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" event={"ID":"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff","Type":"ContainerStarted","Data":"2b3ad9a08a4336978237f32addab7202887b67bbfc8821d932ccea44b99bd1f7"} Nov 21 13:59:59 crc kubenswrapper[4675]: I1121 13:59:59.548542 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" podStartSLOduration=2.029440764 
podStartE2EDuration="14.548523773s" podCreationTimestamp="2025-11-21 13:59:45 +0000 UTC" firstStartedPulling="2025-11-21 13:59:46.414630063 +0000 UTC m=+1663.141044790" lastFinishedPulling="2025-11-21 13:59:58.933713072 +0000 UTC m=+1675.660127799" observedRunningTime="2025-11-21 13:59:59.547294672 +0000 UTC m=+1676.273709399" watchObservedRunningTime="2025-11-21 13:59:59.548523773 +0000 UTC m=+1676.274938500" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.139730 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v"] Nov 21 14:00:00 crc kubenswrapper[4675]: E1121 14:00:00.140688 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" containerName="heat-engine" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.140714 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" containerName="heat-engine" Nov 21 14:00:00 crc kubenswrapper[4675]: E1121 14:00:00.140727 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerName="heat-api" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.140736 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="979b6714-8706-4ab6-bd6e-dd127eed8347" containerName="heat-api" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.141025 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e39ea81-bdee-475a-87ea-5fbd7c02759f" containerName="heat-cfnapi" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.141060 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3a6cb1-0481-41d4-9b07-cf568e6a4ba2" containerName="heat-engine" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.142026 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.148010 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.148199 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.158962 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v"] Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.165392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a79992-9dba-48d6-96b3-ceeebc63cedd-config-volume\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.165804 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a79992-9dba-48d6-96b3-ceeebc63cedd-secret-volume\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.166144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf85k\" (UniqueName: \"kubernetes.io/projected/01a79992-9dba-48d6-96b3-ceeebc63cedd-kube-api-access-lf85k\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.268156 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf85k\" (UniqueName: \"kubernetes.io/projected/01a79992-9dba-48d6-96b3-ceeebc63cedd-kube-api-access-lf85k\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.268560 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a79992-9dba-48d6-96b3-ceeebc63cedd-config-volume\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.268922 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a79992-9dba-48d6-96b3-ceeebc63cedd-secret-volume\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.269820 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a79992-9dba-48d6-96b3-ceeebc63cedd-config-volume\") pod 
\"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.276783 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a79992-9dba-48d6-96b3-ceeebc63cedd-secret-volume\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.288567 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf85k\" (UniqueName: \"kubernetes.io/projected/01a79992-9dba-48d6-96b3-ceeebc63cedd-kube-api-access-lf85k\") pod \"collect-profiles-29395560-flp8v\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:00 crc kubenswrapper[4675]: I1121 14:00:00.462201 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:01 crc kubenswrapper[4675]: I1121 14:00:01.107954 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v"] Nov 21 14:00:01 crc kubenswrapper[4675]: I1121 14:00:01.570296 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" event={"ID":"01a79992-9dba-48d6-96b3-ceeebc63cedd","Type":"ContainerStarted","Data":"5e7048f2f96279f541c39a0f73a82d035c4ddb920b98f74ab814fdc015ba0c35"} Nov 21 14:00:01 crc kubenswrapper[4675]: I1121 14:00:01.570734 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" event={"ID":"01a79992-9dba-48d6-96b3-ceeebc63cedd","Type":"ContainerStarted","Data":"4f961a71d509a4ada1cc309f135f005af7c7f25be7836201fe818fe09d54c175"} Nov 21 14:00:01 crc kubenswrapper[4675]: I1121 14:00:01.597967 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" podStartSLOduration=1.597944462 podStartE2EDuration="1.597944462s" podCreationTimestamp="2025-11-21 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:01.589523911 +0000 UTC m=+1678.315938638" watchObservedRunningTime="2025-11-21 14:00:01.597944462 +0000 UTC m=+1678.324359189" Nov 21 14:00:02 crc kubenswrapper[4675]: I1121 14:00:02.400734 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 14:00:02 crc kubenswrapper[4675]: I1121 14:00:02.585504 4675 generic.go:334] "Generic (PLEG): container finished" podID="01a79992-9dba-48d6-96b3-ceeebc63cedd" containerID="5e7048f2f96279f541c39a0f73a82d035c4ddb920b98f74ab814fdc015ba0c35" exitCode=0 Nov 21 14:00:02 crc kubenswrapper[4675]: I1121 14:00:02.585553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" 
event={"ID":"01a79992-9dba-48d6-96b3-ceeebc63cedd","Type":"ContainerDied","Data":"5e7048f2f96279f541c39a0f73a82d035c4ddb920b98f74ab814fdc015ba0c35"} Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.145227 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.280620 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf85k\" (UniqueName: \"kubernetes.io/projected/01a79992-9dba-48d6-96b3-ceeebc63cedd-kube-api-access-lf85k\") pod \"01a79992-9dba-48d6-96b3-ceeebc63cedd\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.281080 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a79992-9dba-48d6-96b3-ceeebc63cedd-config-volume\") pod \"01a79992-9dba-48d6-96b3-ceeebc63cedd\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.281108 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a79992-9dba-48d6-96b3-ceeebc63cedd-secret-volume\") pod \"01a79992-9dba-48d6-96b3-ceeebc63cedd\" (UID: \"01a79992-9dba-48d6-96b3-ceeebc63cedd\") " Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.282927 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a79992-9dba-48d6-96b3-ceeebc63cedd-config-volume" (OuterVolumeSpecName: "config-volume") pod "01a79992-9dba-48d6-96b3-ceeebc63cedd" (UID: "01a79992-9dba-48d6-96b3-ceeebc63cedd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.289036 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a79992-9dba-48d6-96b3-ceeebc63cedd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "01a79992-9dba-48d6-96b3-ceeebc63cedd" (UID: "01a79992-9dba-48d6-96b3-ceeebc63cedd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.290603 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a79992-9dba-48d6-96b3-ceeebc63cedd-kube-api-access-lf85k" (OuterVolumeSpecName: "kube-api-access-lf85k") pod "01a79992-9dba-48d6-96b3-ceeebc63cedd" (UID: "01a79992-9dba-48d6-96b3-ceeebc63cedd"). InnerVolumeSpecName "kube-api-access-lf85k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.383679 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf85k\" (UniqueName: \"kubernetes.io/projected/01a79992-9dba-48d6-96b3-ceeebc63cedd-kube-api-access-lf85k\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.383713 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a79992-9dba-48d6-96b3-ceeebc63cedd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.383724 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01a79992-9dba-48d6-96b3-ceeebc63cedd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.617139 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" event={"ID":"01a79992-9dba-48d6-96b3-ceeebc63cedd","Type":"ContainerDied","Data":"4f961a71d509a4ada1cc309f135f005af7c7f25be7836201fe818fe09d54c175"} Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.617209 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f961a71d509a4ada1cc309f135f005af7c7f25be7836201fe818fe09d54c175" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.617320 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v" Nov 21 14:00:04 crc kubenswrapper[4675]: I1121 14:00:04.860875 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:00:04 crc kubenswrapper[4675]: E1121 14:00:04.863469 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.511314 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmqj7"] Nov 21 14:00:11 crc kubenswrapper[4675]: E1121 14:00:11.512420 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a79992-9dba-48d6-96b3-ceeebc63cedd" containerName="collect-profiles" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.512451 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a79992-9dba-48d6-96b3-ceeebc63cedd" containerName="collect-profiles" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.512672 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a79992-9dba-48d6-96b3-ceeebc63cedd" containerName="collect-profiles" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.514247 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.527187 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmqj7"] Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.546914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5mpj\" (UniqueName: \"kubernetes.io/projected/f0093acc-562a-48c9-b1d1-bde5cdb129be-kube-api-access-l5mpj\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.546968 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-utilities\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.547104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-catalog-content\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.649746 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5mpj\" (UniqueName: \"kubernetes.io/projected/f0093acc-562a-48c9-b1d1-bde5cdb129be-kube-api-access-l5mpj\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.649802 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-utilities\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.649919 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-catalog-content\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.650549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-catalog-content\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.650591 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-utilities\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.672035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l5mpj\" (UniqueName: \"kubernetes.io/projected/f0093acc-562a-48c9-b1d1-bde5cdb129be-kube-api-access-l5mpj\") pod \"redhat-operators-tmqj7\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:11 crc kubenswrapper[4675]: I1121 14:00:11.851383 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.364959 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmqj7"] Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.658446 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.721326 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerStarted","Data":"9b21a8bb4d2309b177f0d7cbc0c1a93e7f89c7759f1c7f0657522d70136e87d2"} Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.758033 4675 generic.go:334] "Generic (PLEG): container finished" podID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerID="6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea" exitCode=137 Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.758129 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerDied","Data":"6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea"} Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.758165 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7aab9c0-e5af-48f5-895a-1e560b3ddb35","Type":"ContainerDied","Data":"9a0fb8f30f2e42cd9c7fbf8b3f9c6a11e59f354df32e0b2c6f76c24ff185f736"} Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.758186 4675 scope.go:117] "RemoveContainer" containerID="6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.758387 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.779919 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-sg-core-conf-yaml\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.780312 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-log-httpd\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.780372 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-run-httpd\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.780476 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-config-data\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.780531 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-combined-ca-bundle\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.780600 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkjdf\" (UniqueName: \"kubernetes.io/projected/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-kube-api-access-kkjdf\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.780630 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-scripts\") pod \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\" (UID: \"a7aab9c0-e5af-48f5-895a-1e560b3ddb35\") " Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.782279 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.788217 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.791616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-scripts" (OuterVolumeSpecName: "scripts") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.792139 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-kube-api-access-kkjdf" (OuterVolumeSpecName: "kube-api-access-kkjdf") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "kube-api-access-kkjdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.816257 4675 scope.go:117] "RemoveContainer" containerID="e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.887187 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.887219 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.887232 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkjdf\" (UniqueName: \"kubernetes.io/projected/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-kube-api-access-kkjdf\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.887244 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.919949 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.976228 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.992841 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:12 crc kubenswrapper[4675]: I1121 14:00:12.992885 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.049018 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-config-data" (OuterVolumeSpecName: "config-data") pod "a7aab9c0-e5af-48f5-895a-1e560b3ddb35" (UID: "a7aab9c0-e5af-48f5-895a-1e560b3ddb35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.089901 4675 scope.go:117] "RemoveContainer" containerID="5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.095124 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aab9c0-e5af-48f5-895a-1e560b3ddb35-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.123664 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.137672 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.138365 4675 scope.go:117] "RemoveContainer" containerID="9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.182229 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.183488 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="sg-core" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.183514 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="sg-core" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.183612 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-central-agent" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.191181 4675 scope.go:117] "RemoveContainer" containerID="6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.192016 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-central-agent" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.192134 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="proxy-httpd" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.192153 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="proxy-httpd" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.192215 4675 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-notification-agent" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.192226 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-notification-agent" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.194296 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-notification-agent" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.194354 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="ceilometer-central-agent" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.194390 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="sg-core" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.194408 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" containerName="proxy-httpd" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.202828 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea\": container with ID starting with 6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea not found: ID does not exist" containerID="6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.202914 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea"} err="failed to get container status \"6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea\": rpc error: code = NotFound desc = could not find container \"6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea\": container with ID starting with 6583c4976503bdfed89cf7d8378099dd6800f2faf84c7bd35b70847568ea36ea not found: ID does not exist" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.203119 4675 scope.go:117] "RemoveContainer" containerID="e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.204010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.204801 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd\": container with ID starting with e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd not found: ID does not exist" containerID="e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.204853 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd"} err="failed to get container status \"e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd\": rpc error: code = NotFound desc = could not find container \"e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd\": container with ID starting with e58df098c1b1e0b6310c3f4a5769c42acc744031045f8eaf095ce466e8fe99fd not found: ID does not exist" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.204885 4675 scope.go:117] "RemoveContainer" containerID="5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.204972 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.207297 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.207743 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.209087 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c\": container with ID starting with 5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c not found: ID does not exist" containerID="5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.209163 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c"} err="failed to get container status \"5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c\": rpc error: code = NotFound desc = could not find container \"5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c\": container with ID starting with 5e599a4629186f759009d26736dadd4aaf4a70970f46432218fa437349d9752c not found: ID does not exist" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.209207 4675 scope.go:117] "RemoveContainer" containerID="9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.209714 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252\": container with ID starting with 9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252 not found: ID does not exist" containerID="9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.209749 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252"} err="failed to get container status \"9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252\": rpc error: code = NotFound desc = could not find container \"9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252\": container with ID starting with 9a611dc9b6c389fa0396bdbf3c87dafb905ad2e8cb80fd825ac629d552488252 not found: ID does not exist" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.308972 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-log-httpd\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.309055 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7gk\" (UniqueName: \"kubernetes.io/projected/44deb6cd-3059-4e60-b67c-2c3006654af7-kube-api-access-bz7gk\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.309145 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-config-data\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.309183 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-scripts\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.309243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-run-httpd\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.309284 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.309333 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.411151 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 
14:00:13.411219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.411304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-log-httpd\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.411347 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7gk\" (UniqueName: \"kubernetes.io/projected/44deb6cd-3059-4e60-b67c-2c3006654af7-kube-api-access-bz7gk\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.411384 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-config-data\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.411409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-scripts\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.411445 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-run-httpd\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.412201 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-run-httpd\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.412263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-log-httpd\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.414653 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.414959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-scripts\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.417029 4675 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-config-data\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.417951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: E1121 14:00:13.425726 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aab9c0_e5af_48f5_895a_1e560b3ddb35.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aab9c0_e5af_48f5_895a_1e560b3ddb35.slice/crio-9a0fb8f30f2e42cd9c7fbf8b3f9c6a11e59f354df32e0b2c6f76c24ff185f736\": RecentStats: unable to find data in memory cache]" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.430842 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7gk\" (UniqueName: \"kubernetes.io/projected/44deb6cd-3059-4e60-b67c-2c3006654af7-kube-api-access-bz7gk\") pod \"ceilometer-0\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.530032 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.773738 4675 generic.go:334] "Generic (PLEG): container finished" podID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerID="3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1" exitCode=0 Nov 21 14:00:13 crc kubenswrapper[4675]: I1121 14:00:13.774043 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerDied","Data":"3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1"} Nov 21 14:00:14 crc kubenswrapper[4675]: W1121 14:00:14.011294 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44deb6cd_3059_4e60_b67c_2c3006654af7.slice/crio-e559d6cecee8c3732d52368929863b424ac24fbb1a6c52970604f2132aea7258 WatchSource:0}: Error finding container e559d6cecee8c3732d52368929863b424ac24fbb1a6c52970604f2132aea7258: Status 404 returned error can't find the container with id e559d6cecee8c3732d52368929863b424ac24fbb1a6c52970604f2132aea7258 Nov 21 14:00:14 crc kubenswrapper[4675]: I1121 14:00:14.015243 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:14 crc kubenswrapper[4675]: I1121 14:00:14.786705 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerStarted","Data":"e559d6cecee8c3732d52368929863b424ac24fbb1a6c52970604f2132aea7258"} Nov 21 14:00:14 crc kubenswrapper[4675]: I1121 14:00:14.863969 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7aab9c0-e5af-48f5-895a-1e560b3ddb35" path="/var/lib/kubelet/pods/a7aab9c0-e5af-48f5-895a-1e560b3ddb35/volumes" Nov 21 14:00:14 crc kubenswrapper[4675]: I1121 14:00:14.883790 4675 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:15 crc kubenswrapper[4675]: I1121 14:00:15.801075 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerStarted","Data":"bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1"} Nov 21 14:00:15 crc kubenswrapper[4675]: I1121 14:00:15.803294 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerStarted","Data":"f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f"} Nov 21 14:00:16 crc kubenswrapper[4675]: I1121 14:00:16.821664 4675 generic.go:334] "Generic (PLEG): container finished" podID="c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" containerID="2b3ad9a08a4336978237f32addab7202887b67bbfc8821d932ccea44b99bd1f7" exitCode=0 Nov 21 14:00:16 crc kubenswrapper[4675]: I1121 14:00:16.822111 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" event={"ID":"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff","Type":"ContainerDied","Data":"2b3ad9a08a4336978237f32addab7202887b67bbfc8821d932ccea44b99bd1f7"} Nov 21 14:00:16 crc kubenswrapper[4675]: I1121 14:00:16.827345 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerStarted","Data":"1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450"} Nov 21 14:00:17 crc kubenswrapper[4675]: I1121 14:00:17.840291 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerStarted","Data":"a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240"} Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.329541 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.435040 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6gj\" (UniqueName: \"kubernetes.io/projected/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-kube-api-access-nj6gj\") pod \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.435194 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-scripts\") pod \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.435321 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-combined-ca-bundle\") pod \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.435422 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-config-data\") pod \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\" (UID: \"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff\") " Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.440287 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-scripts" (OuterVolumeSpecName: "scripts") pod "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" (UID: "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.442314 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-kube-api-access-nj6gj" (OuterVolumeSpecName: "kube-api-access-nj6gj") pod "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" (UID: "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff"). InnerVolumeSpecName "kube-api-access-nj6gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.485314 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-config-data" (OuterVolumeSpecName: "config-data") pod "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" (UID: "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.501306 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" (UID: "c28aa6ba-e1e4-45ef-8c5e-33a3263103ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.538295 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6gj\" (UniqueName: \"kubernetes.io/projected/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-kube-api-access-nj6gj\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.538329 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.538339 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.538349 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.854525 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.871405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rxmhb" event={"ID":"c28aa6ba-e1e4-45ef-8c5e-33a3263103ff","Type":"ContainerDied","Data":"827ad82647b370a169fe7b49df28c44c80bcf8973c18d9d73f3f17513d171905"} Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.871464 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827ad82647b370a169fe7b49df28c44c80bcf8973c18d9d73f3f17513d171905" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.983941 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 14:00:18 crc kubenswrapper[4675]: E1121 14:00:18.984492 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" containerName="nova-cell0-conductor-db-sync" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.984509 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" containerName="nova-cell0-conductor-db-sync" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.984705 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" containerName="nova-cell0-conductor-db-sync" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.985527 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.988460 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xh9wr" Nov 21 14:00:18 crc kubenswrapper[4675]: I1121 14:00:18.988529 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.024151 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.153272 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.153419 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csvv\" (UniqueName: \"kubernetes.io/projected/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-kube-api-access-6csvv\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.153451 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.255650 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csvv\" (UniqueName: \"kubernetes.io/projected/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-kube-api-access-6csvv\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.255987 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.256157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.261431 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.262185 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.275089 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csvv\" (UniqueName: \"kubernetes.io/projected/18c40348-4d27-4b4c-9b8a-eac9b8b7252a-kube-api-access-6csvv\") pod \"nova-cell0-conductor-0\" (UID: \"18c40348-4d27-4b4c-9b8a-eac9b8b7252a\") " pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.316413 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.849595 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:00:19 crc kubenswrapper[4675]: E1121 14:00:19.850478 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.893127 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerStarted","Data":"489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10"} Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.893445 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-central-agent" containerID="cri-o://f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f" gracePeriod=30 Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.893604 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.893634 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="proxy-httpd" containerID="cri-o://489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10" gracePeriod=30 Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.893711 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="sg-core" containerID="cri-o://a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240" gracePeriod=30 Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.893796 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-notification-agent" containerID="cri-o://1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450" gracePeriod=30 Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.952552 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 14:00:19 crc kubenswrapper[4675]: I1121 14:00:19.965425 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.085675133 
podStartE2EDuration="6.96540768s" podCreationTimestamp="2025-11-21 14:00:13 +0000 UTC" firstStartedPulling="2025-11-21 14:00:14.014295623 +0000 UTC m=+1690.740710350" lastFinishedPulling="2025-11-21 14:00:18.89402817 +0000 UTC m=+1695.620442897" observedRunningTime="2025-11-21 14:00:19.916687243 +0000 UTC m=+1696.643102010" watchObservedRunningTime="2025-11-21 14:00:19.96540768 +0000 UTC m=+1696.691822407" Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.906211 4675 generic.go:334] "Generic (PLEG): container finished" podID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerID="a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240" exitCode=2 Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.906549 4675 generic.go:334] "Generic (PLEG): container finished" podID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerID="1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450" exitCode=0 Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.906291 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerDied","Data":"a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240"} Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.906589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerDied","Data":"1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450"} Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.908612 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"18c40348-4d27-4b4c-9b8a-eac9b8b7252a","Type":"ContainerStarted","Data":"2cb2c63b4e4a4fb8744a690a95b32b1f998b62eda100d3fa334d759e7aaee3e6"} Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.908661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"18c40348-4d27-4b4c-9b8a-eac9b8b7252a","Type":"ContainerStarted","Data":"f608f21694cadf3f996b37eca97493c5bef61ffce0c5f5646cb48a013e4b4b3d"} Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.908769 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:20 crc kubenswrapper[4675]: I1121 14:00:20.933824 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.933802166 podStartE2EDuration="2.933802166s" podCreationTimestamp="2025-11-21 14:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:20.924817212 +0000 UTC m=+1697.651231939" watchObservedRunningTime="2025-11-21 14:00:20.933802166 +0000 UTC m=+1697.660216893" Nov 21 14:00:23 crc kubenswrapper[4675]: I1121 14:00:23.946971 4675 generic.go:334] "Generic (PLEG): container finished" podID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerID="bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1" exitCode=0 Nov 21 14:00:23 crc kubenswrapper[4675]: I1121 14:00:23.947010 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerDied","Data":"bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1"} Nov 21 14:00:24 crc kubenswrapper[4675]: I1121 14:00:24.961833 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerStarted","Data":"e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f"} Nov 21 14:00:24 crc kubenswrapper[4675]: I1121 14:00:24.979244 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tmqj7" podStartSLOduration=3.424151114 podStartE2EDuration="13.979224167s" podCreationTimestamp="2025-11-21 14:00:11 +0000 UTC" firstStartedPulling="2025-11-21 14:00:13.776381498 +0000 UTC m=+1690.502796225" lastFinishedPulling="2025-11-21 14:00:24.331454551 +0000 UTC m=+1701.057869278" observedRunningTime="2025-11-21 14:00:24.978355705 +0000 UTC m=+1701.704770442" watchObservedRunningTime="2025-11-21 14:00:24.979224167 +0000 UTC m=+1701.705638894" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.675551 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-j4w5c"] Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.677773 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.688273 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-aaf6-account-create-4r6lg"] Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.690888 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.694086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.699698 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-j4w5c"] Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.714893 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-aaf6-account-create-4r6lg"] Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.874455 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qxsh\" (UniqueName: \"kubernetes.io/projected/cb891c7f-db3b-4e11-a6dd-9bad582343a3-kube-api-access-4qxsh\") pod \"aodh-db-create-j4w5c\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.875111 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-operator-scripts\") pod \"aodh-aaf6-account-create-4r6lg\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.875375 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb891c7f-db3b-4e11-a6dd-9bad582343a3-operator-scripts\") pod \"aodh-db-create-j4w5c\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.875574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvht\" (UniqueName: \"kubernetes.io/projected/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-kube-api-access-2nvht\") pod \"aodh-aaf6-account-create-4r6lg\" (UID: 
\"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.977275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-operator-scripts\") pod \"aodh-aaf6-account-create-4r6lg\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.977378 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb891c7f-db3b-4e11-a6dd-9bad582343a3-operator-scripts\") pod \"aodh-db-create-j4w5c\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.977416 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nvht\" (UniqueName: \"kubernetes.io/projected/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-kube-api-access-2nvht\") pod \"aodh-aaf6-account-create-4r6lg\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.977504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qxsh\" (UniqueName: \"kubernetes.io/projected/cb891c7f-db3b-4e11-a6dd-9bad582343a3-kube-api-access-4qxsh\") pod \"aodh-db-create-j4w5c\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.977915 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-operator-scripts\") pod \"aodh-aaf6-account-create-4r6lg\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:26 crc kubenswrapper[4675]: I1121 14:00:26.978615 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb891c7f-db3b-4e11-a6dd-9bad582343a3-operator-scripts\") pod \"aodh-db-create-j4w5c\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.003287 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qxsh\" (UniqueName: \"kubernetes.io/projected/cb891c7f-db3b-4e11-a6dd-9bad582343a3-kube-api-access-4qxsh\") pod \"aodh-db-create-j4w5c\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.003594 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvht\" (UniqueName: \"kubernetes.io/projected/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-kube-api-access-2nvht\") pod \"aodh-aaf6-account-create-4r6lg\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.015966 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.303862 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.544721 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-aaf6-account-create-4r6lg"] Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.839018 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-j4w5c"] Nov 21 14:00:27 crc kubenswrapper[4675]: W1121 14:00:27.842682 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb891c7f_db3b_4e11_a6dd_9bad582343a3.slice/crio-45a3c609e2b536fc7f7d17c33454cfb38474ec51840f8542917b209a6cd6592a WatchSource:0}: Error finding container 45a3c609e2b536fc7f7d17c33454cfb38474ec51840f8542917b209a6cd6592a: Status 404 returned error can't find the container with id 45a3c609e2b536fc7f7d17c33454cfb38474ec51840f8542917b209a6cd6592a Nov 21 14:00:27 crc kubenswrapper[4675]: I1121 14:00:27.997453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-j4w5c" event={"ID":"cb891c7f-db3b-4e11-a6dd-9bad582343a3","Type":"ContainerStarted","Data":"45a3c609e2b536fc7f7d17c33454cfb38474ec51840f8542917b209a6cd6592a"} Nov 21 14:00:28 crc kubenswrapper[4675]: I1121 14:00:28.001655 4675 generic.go:334] "Generic (PLEG): container finished" podID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerID="f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f" exitCode=0 Nov 21 14:00:28 crc kubenswrapper[4675]: I1121 14:00:28.001751 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerDied","Data":"f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f"} Nov 21 14:00:28 crc kubenswrapper[4675]: I1121 14:00:28.004287 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-aaf6-account-create-4r6lg" event={"ID":"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f","Type":"ContainerStarted","Data":"bed4fe05f9df976bb44cc9146c6258506584d744d119239593a94078c90d369c"} Nov 21 14:00:28 crc kubenswrapper[4675]: I1121 14:00:28.004334 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-aaf6-account-create-4r6lg" event={"ID":"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f","Type":"ContainerStarted","Data":"4d78f564a107fa55807f069951f5335e86c6dd48925a5eaefe39a6ea700f35f4"} Nov 21 14:00:28 crc kubenswrapper[4675]: I1121 14:00:28.024430 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-aaf6-account-create-4r6lg" podStartSLOduration=2.024411825 podStartE2EDuration="2.024411825s" podCreationTimestamp="2025-11-21 14:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:28.019881642 +0000 UTC m=+1704.746296369" watchObservedRunningTime="2025-11-21 14:00:28.024411825 +0000 UTC m=+1704.750826552" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.016854 4675 generic.go:334] "Generic (PLEG): container finished" podID="cb891c7f-db3b-4e11-a6dd-9bad582343a3" containerID="709a48b9984132ff6ea4d89e11100eac523f02c55b9c3fb72551a987728dba9f" exitCode=0 Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.016909 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-j4w5c" event={"ID":"cb891c7f-db3b-4e11-a6dd-9bad582343a3","Type":"ContainerDied","Data":"709a48b9984132ff6ea4d89e11100eac523f02c55b9c3fb72551a987728dba9f"} Nov 
21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.019296 4675 generic.go:334] "Generic (PLEG): container finished" podID="d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" containerID="bed4fe05f9df976bb44cc9146c6258506584d744d119239593a94078c90d369c" exitCode=0 Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.019329 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-aaf6-account-create-4r6lg" event={"ID":"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f","Type":"ContainerDied","Data":"bed4fe05f9df976bb44cc9146c6258506584d744d119239593a94078c90d369c"} Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.346537 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.802516 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6s6ct"] Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.804947 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.809908 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.810506 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.812884 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6s6ct"] Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.948529 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-scripts\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.948642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7pb\" (UniqueName: \"kubernetes.io/projected/35624ff8-b298-4e69-a4d6-8dd5e3401b07-kube-api-access-hv7pb\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.948698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:29 crc kubenswrapper[4675]: I1121 14:00:29.948775 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-config-data\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.050429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv7pb\" (UniqueName: \"kubernetes.io/projected/35624ff8-b298-4e69-a4d6-8dd5e3401b07-kube-api-access-hv7pb\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: 
\"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.050492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.050544 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-config-data\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.050683 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-scripts\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.064533 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.071174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-scripts\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.073035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-config-data\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.095991 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.103678 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.116018 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.132888 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv7pb\" (UniqueName: \"kubernetes.io/projected/35624ff8-b298-4e69-a4d6-8dd5e3401b07-kube-api-access-hv7pb\") pod \"nova-cell0-cell-mapping-6s6ct\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") " pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.143917 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.263488 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.263564 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-config-data\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.263653 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv7r8\" (UniqueName: \"kubernetes.io/projected/17e64a54-c69c-4c4f-b0c6-01b6742785f7-kube-api-access-mv7r8\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.365317 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.365384 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-config-data\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.365476 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv7r8\" (UniqueName: \"kubernetes.io/projected/17e64a54-c69c-4c4f-b0c6-01b6742785f7-kube-api-access-mv7r8\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.372991 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-config-data\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.382354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.409835 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.412724 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.424443 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.431145 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6s6ct" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.450803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv7r8\" (UniqueName: \"kubernetes.io/projected/17e64a54-c69c-4c4f-b0c6-01b6742785f7-kube-api-access-mv7r8\") pod \"nova-scheduler-0\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.463931 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.465583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.467312 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.484110 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.553704 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.554326 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.575670 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgvg\" (UniqueName: \"kubernetes.io/projected/8f05482c-68ed-43cc-9762-29ac233f69d3-kube-api-access-dvgvg\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.575726 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.575802 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljl6r\" (UniqueName: \"kubernetes.io/projected/61dba3cf-1cb1-4641-9435-2eac045c894e-kube-api-access-ljl6r\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.575901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-config-data\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.575963 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f05482c-68ed-43cc-9762-29ac233f69d3-logs\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.576004 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.576038 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.653981 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.656570 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.659438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.681813 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-config-data\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.681917 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f05482c-68ed-43cc-9762-29ac233f69d3-logs\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.689177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.689259 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.689340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvgvg\" (UniqueName: \"kubernetes.io/projected/8f05482c-68ed-43cc-9762-29ac233f69d3-kube-api-access-dvgvg\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.689342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.689376 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.689517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljl6r\" (UniqueName: \"kubernetes.io/projected/61dba3cf-1cb1-4641-9435-2eac045c894e-kube-api-access-ljl6r\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.694482 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.694814 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8f05482c-68ed-43cc-9762-29ac233f69d3-logs\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.698079 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.707967 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.724642 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgvg\" (UniqueName: \"kubernetes.io/projected/8f05482c-68ed-43cc-9762-29ac233f69d3-kube-api-access-dvgvg\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.730007 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-config-data\") pod \"nova-metadata-0\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.732518 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-ksc5h"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.734604 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.740640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljl6r\" (UniqueName: \"kubernetes.io/projected/61dba3cf-1cb1-4641-9435-2eac045c894e-kube-api-access-ljl6r\") pod \"nova-cell1-novncproxy-0\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.778117 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-ksc5h"] Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.797779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1436ed81-90ae-4f3c-b854-539012fbf57e-logs\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.798123 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67phk\" (UniqueName: \"kubernetes.io/projected/1436ed81-90ae-4f3c-b854-539012fbf57e-kube-api-access-67phk\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.798218 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.798302 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-config-data\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.837900 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.884540 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.894030 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.900470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.900945 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1436ed81-90ae-4f3c-b854-539012fbf57e-logs\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901084 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901201 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4hv\" (UniqueName: \"kubernetes.io/projected/8d812b38-ac4b-4262-8642-bfe5c2b19222-kube-api-access-7q4hv\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901320 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901575 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67phk\" (UniqueName: \"kubernetes.io/projected/1436ed81-90ae-4f3c-b854-539012fbf57e-kube-api-access-67phk\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901674 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.901793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-config-data\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc 
kubenswrapper[4675]: I1121 14:00:30.901897 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-config\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.903640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1436ed81-90ae-4f3c-b854-539012fbf57e-logs\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.924724 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.943681 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-config-data\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:30 crc kubenswrapper[4675]: I1121 14:00:30.955411 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67phk\" (UniqueName: \"kubernetes.io/projected/1436ed81-90ae-4f3c-b854-539012fbf57e-kube-api-access-67phk\") pod \"nova-api-0\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " pod="openstack/nova-api-0" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.008613 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.008985 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nvht\" (UniqueName: \"kubernetes.io/projected/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-kube-api-access-2nvht\") pod \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009340 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-operator-scripts\") pod \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\" (UID: \"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f\") " Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009689 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009770 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009800 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q4hv\" (UniqueName: \"kubernetes.io/projected/8d812b38-ac4b-4262-8642-bfe5c2b19222-kube-api-access-7q4hv\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009832 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009883 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.009932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-config\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.010689 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 
14:00:31.011238 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" (UID: "d8d21d87-934d-4af7-b8ea-f0e58faa3a5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.012682 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.016006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.020390 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-config\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.020799 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.036397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-kube-api-access-2nvht" (OuterVolumeSpecName: "kube-api-access-2nvht") pod "d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" (UID: "d8d21d87-934d-4af7-b8ea-f0e58faa3a5f"). InnerVolumeSpecName "kube-api-access-2nvht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.037460 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.041963 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q4hv\" (UniqueName: \"kubernetes.io/projected/8d812b38-ac4b-4262-8642-bfe5c2b19222-kube-api-access-7q4hv\") pod \"dnsmasq-dns-568d7fd7cf-ksc5h\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.055818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-aaf6-account-create-4r6lg" event={"ID":"d8d21d87-934d-4af7-b8ea-f0e58faa3a5f","Type":"ContainerDied","Data":"4d78f564a107fa55807f069951f5335e86c6dd48925a5eaefe39a6ea700f35f4"} Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.057318 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d78f564a107fa55807f069951f5335e86c6dd48925a5eaefe39a6ea700f35f4" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.057530 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-aaf6-account-create-4r6lg" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.059685 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.079521 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-j4w5c" event={"ID":"cb891c7f-db3b-4e11-a6dd-9bad582343a3","Type":"ContainerDied","Data":"45a3c609e2b536fc7f7d17c33454cfb38474ec51840f8542917b209a6cd6592a"} Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.079554 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a3c609e2b536fc7f7d17c33454cfb38474ec51840f8542917b209a6cd6592a" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.079623 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-j4w5c" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.113703 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.113739 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nvht\" (UniqueName: \"kubernetes.io/projected/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f-kube-api-access-2nvht\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.216233 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb891c7f-db3b-4e11-a6dd-9bad582343a3-operator-scripts\") pod \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.216407 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qxsh\" (UniqueName: \"kubernetes.io/projected/cb891c7f-db3b-4e11-a6dd-9bad582343a3-kube-api-access-4qxsh\") pod \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\" (UID: \"cb891c7f-db3b-4e11-a6dd-9bad582343a3\") " Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.217400 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb891c7f-db3b-4e11-a6dd-9bad582343a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb891c7f-db3b-4e11-a6dd-9bad582343a3" (UID: "cb891c7f-db3b-4e11-a6dd-9bad582343a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.230536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb891c7f-db3b-4e11-a6dd-9bad582343a3-kube-api-access-4qxsh" (OuterVolumeSpecName: "kube-api-access-4qxsh") pod "cb891c7f-db3b-4e11-a6dd-9bad582343a3" (UID: "cb891c7f-db3b-4e11-a6dd-9bad582343a3"). InnerVolumeSpecName "kube-api-access-4qxsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.320393 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb891c7f-db3b-4e11-a6dd-9bad582343a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.320434 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qxsh\" (UniqueName: \"kubernetes.io/projected/cb891c7f-db3b-4e11-a6dd-9bad582343a3-kube-api-access-4qxsh\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.854532 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.854895 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.865710 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:31 crc kubenswrapper[4675]: I1121 14:00:31.927492 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6s6ct"] Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.129829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6s6ct" event={"ID":"35624ff8-b298-4e69-a4d6-8dd5e3401b07","Type":"ContainerStarted","Data":"6b01cb19f72fab6bbe489028715b71d1f77b11b639d69f12af07c4744ebec6f0"} Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.137010 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rl7zh"] Nov 21 14:00:32 crc kubenswrapper[4675]: E1121 14:00:32.137541 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb891c7f-db3b-4e11-a6dd-9bad582343a3" containerName="mariadb-database-create" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.137552 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb891c7f-db3b-4e11-a6dd-9bad582343a3" containerName="mariadb-database-create" Nov 21 14:00:32 crc kubenswrapper[4675]: E1121 14:00:32.137619 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" containerName="mariadb-account-create" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.137629 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" containerName="mariadb-account-create" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.137831 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb891c7f-db3b-4e11-a6dd-9bad582343a3" containerName="mariadb-database-create" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.137853 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" containerName="mariadb-account-create" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.138774 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.139992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17e64a54-c69c-4c4f-b0c6-01b6742785f7","Type":"ContainerStarted","Data":"8c3431c6ce32fbb53879660bbcb7e554e6bf349a7f7f96f04113cea1590f114b"} Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.143640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.143853 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.193588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rl7zh"] Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.203237 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-config-data\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.203538 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbqv\" (UniqueName: \"kubernetes.io/projected/6993d84e-5485-4c53-aaa7-9ecce1b9689b-kube-api-access-ssbqv\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.203720 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-scripts\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.203907 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.306162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssbqv\" (UniqueName: \"kubernetes.io/projected/6993d84e-5485-4c53-aaa7-9ecce1b9689b-kube-api-access-ssbqv\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.306248 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-scripts\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.306347 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.306444 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-config-data\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.311576 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-config-data\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.314511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-scripts\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.331705 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssbqv\" (UniqueName: \"kubernetes.io/projected/6993d84e-5485-4c53-aaa7-9ecce1b9689b-kube-api-access-ssbqv\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.332592 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rl7zh\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.412134 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:32 crc kubenswrapper[4675]: W1121 14:00:32.427402 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61dba3cf_1cb1_4641_9435_2eac045c894e.slice/crio-0f662acf783405694f6395544156d0a16194fafe0d4fe17cd75ff056b6a3eff2 WatchSource:0}: Error finding container 0f662acf783405694f6395544156d0a16194fafe0d4fe17cd75ff056b6a3eff2: Status 404 returned error can't find the container with id 0f662acf783405694f6395544156d0a16194fafe0d4fe17cd75ff056b6a3eff2 Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.440827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.461033 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.483256 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-ksc5h"] Nov 21 14:00:32 crc kubenswrapper[4675]: I1121 14:00:32.528470 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:32.957297 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tmqj7" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="registry-server" probeResult="failure" output=< Nov 21 14:00:33 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:00:33 crc kubenswrapper[4675]: > Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.157409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1436ed81-90ae-4f3c-b854-539012fbf57e","Type":"ContainerStarted","Data":"fcfa5a9a121020802882282bd2794d67165403b6adb73b7424bfbcf51e4c9dcb"} Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.158503 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61dba3cf-1cb1-4641-9435-2eac045c894e","Type":"ContainerStarted","Data":"0f662acf783405694f6395544156d0a16194fafe0d4fe17cd75ff056b6a3eff2"} Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.159690 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" event={"ID":"8d812b38-ac4b-4262-8642-bfe5c2b19222","Type":"ContainerStarted","Data":"25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578"} Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.159710 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" event={"ID":"8d812b38-ac4b-4262-8642-bfe5c2b19222","Type":"ContainerStarted","Data":"dfcff063b5976329ba022b96169f20d9de2659998b2804e669f7bb315134a2ba"} Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.167007 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6s6ct" event={"ID":"35624ff8-b298-4e69-a4d6-8dd5e3401b07","Type":"ContainerStarted","Data":"e7ab965dc6e4cc7eed69ca6e5d6ccd026fb5d349195679044f0fe3ddceef504c"} Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.169083 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f05482c-68ed-43cc-9762-29ac233f69d3","Type":"ContainerStarted","Data":"802b0c3d15f79db76b86b3d6ae36739ade4fbdfe8a5bfff74dea50ee64e14862"} Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.232398 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6s6ct" podStartSLOduration=4.232376565 podStartE2EDuration="4.232376565s" podCreationTimestamp="2025-11-21 14:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:33.21138022 +0000 UTC m=+1709.937794947" watchObservedRunningTime="2025-11-21 14:00:33.232376565 +0000 UTC m=+1709.958791292" Nov 21 14:00:33 crc kubenswrapper[4675]: I1121 14:00:33.848775 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:00:33 crc kubenswrapper[4675]: E1121 14:00:33.849586 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" 
podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:00:34 crc kubenswrapper[4675]: I1121 14:00:34.186027 4675 generic.go:334] "Generic (PLEG): container finished" podID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerID="25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578" exitCode=0 Nov 21 14:00:34 crc kubenswrapper[4675]: I1121 14:00:34.187027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" event={"ID":"8d812b38-ac4b-4262-8642-bfe5c2b19222","Type":"ContainerDied","Data":"25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578"} Nov 21 14:00:34 crc kubenswrapper[4675]: I1121 14:00:34.187076 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rl7zh"] Nov 21 14:00:34 crc kubenswrapper[4675]: I1121 14:00:34.691132 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:00:34 crc kubenswrapper[4675]: I1121 14:00:34.718296 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:35 crc kubenswrapper[4675]: I1121 14:00:35.200814 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" event={"ID":"6993d84e-5485-4c53-aaa7-9ecce1b9689b","Type":"ContainerStarted","Data":"8909b20e51a8cff1957bfe0df7f792c84ba524f59d52509040b3550526c6d144"} Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.152828 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-jr7sk"] Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.154743 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.158472 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-swnjz" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.167871 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.168149 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.168277 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.228668 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jr7sk"] Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.288507 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-config-data\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.288638 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-scripts\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.288682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckwz\" (UniqueName: 
\"kubernetes.io/projected/7b591ff7-fb92-466b-870c-e2138e739b42-kube-api-access-6ckwz\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.288749 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-combined-ca-bundle\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.296242 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f05482c-68ed-43cc-9762-29ac233f69d3","Type":"ContainerStarted","Data":"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b"} Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.298775 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17e64a54-c69c-4c4f-b0c6-01b6742785f7","Type":"ContainerStarted","Data":"97de6efec7ca622936f6872e86b29755bf57aaba3307255cfab2cda2acd5107a"} Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.333672 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.542559009 podStartE2EDuration="7.333653182s" podCreationTimestamp="2025-11-21 14:00:30 +0000 UTC" firstStartedPulling="2025-11-21 14:00:31.839324217 +0000 UTC m=+1708.565738954" lastFinishedPulling="2025-11-21 14:00:36.6304184 +0000 UTC m=+1713.356833127" observedRunningTime="2025-11-21 14:00:37.325251102 +0000 UTC m=+1714.051665829" watchObservedRunningTime="2025-11-21 14:00:37.333653182 +0000 UTC m=+1714.060067909" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.393875 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-config-data\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.394121 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-scripts\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.394183 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckwz\" (UniqueName: \"kubernetes.io/projected/7b591ff7-fb92-466b-870c-e2138e739b42-kube-api-access-6ckwz\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.394275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-combined-ca-bundle\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.406279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-config-data\") pod \"aodh-db-sync-jr7sk\" (UID: 
\"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.407779 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-scripts\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.409610 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-combined-ca-bundle\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.413250 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckwz\" (UniqueName: \"kubernetes.io/projected/7b591ff7-fb92-466b-870c-e2138e739b42-kube-api-access-6ckwz\") pod \"aodh-db-sync-jr7sk\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:37 crc kubenswrapper[4675]: I1121 14:00:37.582047 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.157479 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jr7sk"] Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.327305 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jr7sk" event={"ID":"7b591ff7-fb92-466b-870c-e2138e739b42","Type":"ContainerStarted","Data":"d95772c6391edb8e3235e8d320a6178c82363c2bd0d66452d0e03c701745a5e9"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.331533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1436ed81-90ae-4f3c-b854-539012fbf57e","Type":"ContainerStarted","Data":"b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.331572 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1436ed81-90ae-4f3c-b854-539012fbf57e","Type":"ContainerStarted","Data":"cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.335004 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61dba3cf-1cb1-4641-9435-2eac045c894e","Type":"ContainerStarted","Data":"83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.335847 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="61dba3cf-1cb1-4641-9435-2eac045c894e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d" gracePeriod=30 Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.342721 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" event={"ID":"8d812b38-ac4b-4262-8642-bfe5c2b19222","Type":"ContainerStarted","Data":"aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.342849 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.346890 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f05482c-68ed-43cc-9762-29ac233f69d3","Type":"ContainerStarted","Data":"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.347101 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-log" containerID="cri-o://2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b" gracePeriod=30 Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.347192 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-metadata" containerID="cri-o://c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6" gracePeriod=30 Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.352998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" event={"ID":"6993d84e-5485-4c53-aaa7-9ecce1b9689b","Type":"ContainerStarted","Data":"c441edb88f548b85052c4e3f5fd231272728ce23736d5fe0ce386c13e2538806"} Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.362089 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.134854024 podStartE2EDuration="8.362055888s" podCreationTimestamp="2025-11-21 14:00:30 +0000 UTC" firstStartedPulling="2025-11-21 14:00:32.407918504 +0000 UTC m=+1709.134333231" lastFinishedPulling="2025-11-21 14:00:36.635120368 +0000 UTC m=+1713.361535095" observedRunningTime="2025-11-21 14:00:38.351866604 +0000 UTC m=+1715.078281331" watchObservedRunningTime="2025-11-21 14:00:38.362055888 +0000 UTC m=+1715.088470615" Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.414582 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.198858924 podStartE2EDuration="8.4145599s" podCreationTimestamp="2025-11-21 14:00:30 +0000 UTC" firstStartedPulling="2025-11-21 14:00:32.416192151 +0000 UTC m=+1709.142606878" lastFinishedPulling="2025-11-21 14:00:36.631893127 +0000 UTC m=+1713.358307854" observedRunningTime="2025-11-21 14:00:38.403800652 +0000 UTC m=+1715.130215399" watchObservedRunningTime="2025-11-21 14:00:38.4145599 +0000 UTC m=+1715.140974627" Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.415598 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" podStartSLOduration=8.415588306 podStartE2EDuration="8.415588306s" podCreationTimestamp="2025-11-21 14:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:38.382989932 +0000 UTC m=+1715.109404659" watchObservedRunningTime="2025-11-21 14:00:38.415588306 +0000 UTC m=+1715.142003033" Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.426683 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.23429804 podStartE2EDuration="8.426658313s" podCreationTimestamp="2025-11-21 14:00:30 +0000 UTC" firstStartedPulling="2025-11-21 14:00:32.438229252 +0000 UTC m=+1709.164643979" 
lastFinishedPulling="2025-11-21 14:00:36.630589525 +0000 UTC m=+1713.357004252" observedRunningTime="2025-11-21 14:00:38.419903534 +0000 UTC m=+1715.146318251" watchObservedRunningTime="2025-11-21 14:00:38.426658313 +0000 UTC m=+1715.153073040" Nov 21 14:00:38 crc kubenswrapper[4675]: I1121 14:00:38.452937 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" podStartSLOduration=6.452917439 podStartE2EDuration="6.452917439s" podCreationTimestamp="2025-11-21 14:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:38.444758745 +0000 UTC m=+1715.171173492" watchObservedRunningTime="2025-11-21 14:00:38.452917439 +0000 UTC m=+1715.179332166" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.013278 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.146025 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgvg\" (UniqueName: \"kubernetes.io/projected/8f05482c-68ed-43cc-9762-29ac233f69d3-kube-api-access-dvgvg\") pod \"8f05482c-68ed-43cc-9762-29ac233f69d3\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.146106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f05482c-68ed-43cc-9762-29ac233f69d3-logs\") pod \"8f05482c-68ed-43cc-9762-29ac233f69d3\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.146211 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-config-data\") pod \"8f05482c-68ed-43cc-9762-29ac233f69d3\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.146471 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-combined-ca-bundle\") pod \"8f05482c-68ed-43cc-9762-29ac233f69d3\" (UID: \"8f05482c-68ed-43cc-9762-29ac233f69d3\") " Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.146469 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f05482c-68ed-43cc-9762-29ac233f69d3-logs" (OuterVolumeSpecName: "logs") pod "8f05482c-68ed-43cc-9762-29ac233f69d3" (UID: "8f05482c-68ed-43cc-9762-29ac233f69d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.147167 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f05482c-68ed-43cc-9762-29ac233f69d3-logs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.155272 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f05482c-68ed-43cc-9762-29ac233f69d3-kube-api-access-dvgvg" (OuterVolumeSpecName: "kube-api-access-dvgvg") pod "8f05482c-68ed-43cc-9762-29ac233f69d3" (UID: "8f05482c-68ed-43cc-9762-29ac233f69d3"). InnerVolumeSpecName "kube-api-access-dvgvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.193246 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-config-data" (OuterVolumeSpecName: "config-data") pod "8f05482c-68ed-43cc-9762-29ac233f69d3" (UID: "8f05482c-68ed-43cc-9762-29ac233f69d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.218324 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f05482c-68ed-43cc-9762-29ac233f69d3" (UID: "8f05482c-68ed-43cc-9762-29ac233f69d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.250847 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgvg\" (UniqueName: \"kubernetes.io/projected/8f05482c-68ed-43cc-9762-29ac233f69d3-kube-api-access-dvgvg\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.250901 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.250916 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f05482c-68ed-43cc-9762-29ac233f69d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.366704 4675 generic.go:334] "Generic (PLEG): container finished" podID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerID="c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6" exitCode=0 Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.366735 4675 generic.go:334] "Generic (PLEG): container finished" podID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerID="2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b" exitCode=143 Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.367144 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f05482c-68ed-43cc-9762-29ac233f69d3","Type":"ContainerDied","Data":"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6"} Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.367202 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f05482c-68ed-43cc-9762-29ac233f69d3","Type":"ContainerDied","Data":"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b"} Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.367204 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.367230 4675 scope.go:117] "RemoveContainer" containerID="c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.367216 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f05482c-68ed-43cc-9762-29ac233f69d3","Type":"ContainerDied","Data":"802b0c3d15f79db76b86b3d6ae36739ade4fbdfe8a5bfff74dea50ee64e14862"} Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.397604 4675 scope.go:117] "RemoveContainer" containerID="2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.446810 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.451270 4675 scope.go:117] "RemoveContainer" containerID="c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6" Nov 21 14:00:39 crc kubenswrapper[4675]: E1121 14:00:39.468780 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6\": container with ID starting with c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6 not found: ID does not exist" containerID="c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.468835 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6"} err="failed to get container status \"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6\": rpc error: code = NotFound desc = could not find container \"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6\": container with ID starting with c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6 not found: ID does not exist" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.468867 4675 scope.go:117] "RemoveContainer" containerID="2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b" Nov 21 14:00:39 crc kubenswrapper[4675]: E1121 14:00:39.475194 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b\": container with ID starting with 2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b not found: ID does not exist" containerID="2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.475238 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b"} err="failed to get container status \"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b\": rpc error: code = NotFound desc = could not find container \"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b\": container with ID starting with 2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b not found: ID does not exist" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.475267 4675 scope.go:117] "RemoveContainer" containerID="c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6" Nov 21 14:00:39 crc 
kubenswrapper[4675]: I1121 14:00:39.475756 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6"} err="failed to get container status \"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6\": rpc error: code = NotFound desc = could not find container \"c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6\": container with ID starting with c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6 not found: ID does not exist" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.475781 4675 scope.go:117] "RemoveContainer" containerID="2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.476254 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b"} err="failed to get container status \"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b\": rpc error: code = NotFound desc = could not find container \"2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b\": container with ID starting with 2172b6a9a2f2856f786471e1b4e18e1418dc2de00c5b40df27ec3f98b0d8e31b not found: ID does not exist" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.478379 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.497818 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:39 crc kubenswrapper[4675]: E1121 14:00:39.498669 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-log" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.498691 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-log" Nov 21 14:00:39 crc kubenswrapper[4675]: E1121 14:00:39.498791 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-metadata" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.498806 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-metadata" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.499220 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-metadata" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.499300 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" containerName="nova-metadata-log" Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.501749 4675 util.go:30] "No sandbox for pod can be found. 
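
The RemoveContainer/DeleteContainer exchange above shows deletion being retried for container IDs that CRI-O has already removed: each retry gets a NotFound status back and the result is logged rather than escalated, so deletion stays effectively idempotent. A minimal Go sketch of that tolerate-NotFound pattern over gRPC status codes follows; the removeContainer helper is hypothetical, standing in for a CRI RemoveContainer call, and is not the kubelet's actual code:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer stands in for a CRI RemoveContainer call; here it always
    // reports NotFound, like the runtime does for the already-deleted IDs above.
    func removeContainer(id string) error {
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    // deleteIfPresent treats NotFound as success, so repeated deletions of the
    // same container ID (as in the log above) do not surface an error.
    func deleteIfPresent(id string) error {
    	err := removeContainer(id)
    	if err != nil && status.Code(err) == codes.NotFound {
    		return nil // already gone; nothing left to do
    	}
    	return err
    }

    func main() {
    	// Second (and later) deletions of an already-removed container succeed.
    	fmt.Println(deleteIfPresent("c8bd18a81ff0412470c1aa4d6f43b12766fa2b4bc3b0b6fc3401853c79a47cd6")) // <nil>
    }
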
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.504487 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.505098 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.513007 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.662962 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504c7fbb-9077-42da-95bd-aa45b9d13bed-logs\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.663162 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-config-data\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.663199 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclxp\" (UniqueName: \"kubernetes.io/projected/504c7fbb-9077-42da-95bd-aa45b9d13bed-kube-api-access-fclxp\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.663222 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.663295 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.765999 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-config-data\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.766054 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclxp\" (UniqueName: \"kubernetes.io/projected/504c7fbb-9077-42da-95bd-aa45b9d13bed-kube-api-access-fclxp\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.766112 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.766192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.766274 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504c7fbb-9077-42da-95bd-aa45b9d13bed-logs\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.766675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504c7fbb-9077-42da-95bd-aa45b9d13bed-logs\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.771439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.771566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-config-data\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.774102 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.807695 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fclxp\" (UniqueName: \"kubernetes.io/projected/504c7fbb-9077-42da-95bd-aa45b9d13bed-kube-api-access-fclxp\") pod \"nova-metadata-0\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " pod="openstack/nova-metadata-0"
Nov 21 14:00:39 crc kubenswrapper[4675]: I1121 14:00:39.843801 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 21 14:00:40 crc kubenswrapper[4675]: W1121 14:00:40.526883 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod504c7fbb_9077_42da_95bd_aa45b9d13bed.slice/crio-23f7c377440a2565fec6d32d213aafd1af3be18713d956148fb72419d625eb68 WatchSource:0}: Error finding container 23f7c377440a2565fec6d32d213aafd1af3be18713d956148fb72419d625eb68: Status 404 returned error can't find the container with id 23f7c377440a2565fec6d32d213aafd1af3be18713d956148fb72419d625eb68
Nov 21 14:00:40 crc kubenswrapper[4675]: I1121 14:00:40.527528 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:00:40 crc kubenswrapper[4675]: I1121 14:00:40.555629 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 21 14:00:40 crc kubenswrapper[4675]: I1121 14:00:40.555950 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 21 14:00:40 crc kubenswrapper[4675]: I1121 14:00:40.604673 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 21 14:00:40 crc kubenswrapper[4675]: I1121 14:00:40.862133 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f05482c-68ed-43cc-9762-29ac233f69d3" path="/var/lib/kubelet/pods/8f05482c-68ed-43cc-9762-29ac233f69d3/volumes"
Nov 21 14:00:40 crc kubenswrapper[4675]: I1121 14:00:40.885031 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.010528 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.010936 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.422847 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"504c7fbb-9077-42da-95bd-aa45b9d13bed","Type":"ContainerStarted","Data":"814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a"}
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.423209 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"504c7fbb-9077-42da-95bd-aa45b9d13bed","Type":"ContainerStarted","Data":"44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e"}
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.423329 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"504c7fbb-9077-42da-95bd-aa45b9d13bed","Type":"ContainerStarted","Data":"23f7c377440a2565fec6d32d213aafd1af3be18713d956148fb72419d625eb68"}
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.468765 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.468745914 podStartE2EDuration="2.468745914s" podCreationTimestamp="2025-11-21 14:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:41.461493783 +0000 UTC m=+1718.187908510" watchObservedRunningTime="2025-11-21 14:00:41.468745914 +0000 UTC m=+1718.195160641"
Nov 21 14:00:41 crc kubenswrapper[4675]: I1121 14:00:41.473176 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 21 14:00:42 crc kubenswrapper[4675]: I1121 14:00:42.093266 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 21 14:00:42 crc kubenswrapper[4675]: I1121 14:00:42.093494 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 21 14:00:42 crc kubenswrapper[4675]: I1121 14:00:42.435995 4675 generic.go:334] "Generic (PLEG): container finished" podID="35624ff8-b298-4e69-a4d6-8dd5e3401b07" containerID="e7ab965dc6e4cc7eed69ca6e5d6ccd026fb5d349195679044f0fe3ddceef504c" exitCode=0
Nov 21 14:00:42 crc kubenswrapper[4675]: I1121 14:00:42.436085 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6s6ct" event={"ID":"35624ff8-b298-4e69-a4d6-8dd5e3401b07","Type":"ContainerDied","Data":"e7ab965dc6e4cc7eed69ca6e5d6ccd026fb5d349195679044f0fe3ddceef504c"}
Nov 21 14:00:42 crc kubenswrapper[4675]: I1121 14:00:42.920645 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tmqj7" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:00:42 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:00:42 crc kubenswrapper[4675]: >
Nov 21 14:00:43 crc kubenswrapper[4675]: I1121 14:00:43.606053 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 21 14:00:44 crc kubenswrapper[4675]: I1121 14:00:44.844267 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 21 14:00:44 crc kubenswrapper[4675]: I1121 14:00:44.844711 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 21 14:00:44 crc kubenswrapper[4675]: I1121 14:00:44.987459 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6s6ct"
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.114751 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-config-data\") pod \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") "
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.114993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-scripts\") pod \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") "
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.115059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv7pb\" (UniqueName: \"kubernetes.io/projected/35624ff8-b298-4e69-a4d6-8dd5e3401b07-kube-api-access-hv7pb\") pod \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") "
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.115174 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-combined-ca-bundle\") pod \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\" (UID: \"35624ff8-b298-4e69-a4d6-8dd5e3401b07\") "
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.119827 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-scripts" (OuterVolumeSpecName: "scripts") pod "35624ff8-b298-4e69-a4d6-8dd5e3401b07" (UID: "35624ff8-b298-4e69-a4d6-8dd5e3401b07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.120005 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35624ff8-b298-4e69-a4d6-8dd5e3401b07-kube-api-access-hv7pb" (OuterVolumeSpecName: "kube-api-access-hv7pb") pod "35624ff8-b298-4e69-a4d6-8dd5e3401b07" (UID: "35624ff8-b298-4e69-a4d6-8dd5e3401b07"). InnerVolumeSpecName "kube-api-access-hv7pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.155683 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35624ff8-b298-4e69-a4d6-8dd5e3401b07" (UID: "35624ff8-b298-4e69-a4d6-8dd5e3401b07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.161940 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-config-data" (OuterVolumeSpecName: "config-data") pod "35624ff8-b298-4e69-a4d6-8dd5e3401b07" (UID: "35624ff8-b298-4e69-a4d6-8dd5e3401b07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.217571 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.217607 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv7pb\" (UniqueName: \"kubernetes.io/projected/35624ff8-b298-4e69-a4d6-8dd5e3401b07-kube-api-access-hv7pb\") on node \"crc\" DevicePath \"\""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.217621 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.217632 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35624ff8-b298-4e69-a4d6-8dd5e3401b07-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.469720 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6s6ct" event={"ID":"35624ff8-b298-4e69-a4d6-8dd5e3401b07","Type":"ContainerDied","Data":"6b01cb19f72fab6bbe489028715b71d1f77b11b639d69f12af07c4744ebec6f0"}
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.469751 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6s6ct"
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.470166 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b01cb19f72fab6bbe489028715b71d1f77b11b639d69f12af07c4744ebec6f0"
Nov 21 14:00:45 crc kubenswrapper[4675]: I1121 14:00:45.486285 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jr7sk" event={"ID":"7b591ff7-fb92-466b-870c-e2138e739b42","Type":"ContainerStarted","Data":"3998a59caa25b8b6cada834813c1d8186a9b9e1a99d2a2e1d57ddc343932acb8"}
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.061290 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h"
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.140983 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ptgnp"]
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.141223 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerName="dnsmasq-dns" containerID="cri-o://8b4a9f292ced3e486592fa94663b6791dfa5aefbd09a64098395e90f45ef4c65" gracePeriod=10
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.324150 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.324764 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-log" containerID="cri-o://cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544" gracePeriod=30
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.325053 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-api" containerID="cri-o://b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f" gracePeriod=30
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.343803 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.344200 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="17e64a54-c69c-4c4f-b0c6-01b6742785f7" containerName="nova-scheduler-scheduler" containerID="cri-o://97de6efec7ca622936f6872e86b29755bf57aaba3307255cfab2cda2acd5107a" gracePeriod=30
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.367010 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.367295 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-log" containerID="cri-o://44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e" gracePeriod=30
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.367466 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-metadata" containerID="cri-o://814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a" gracePeriod=30
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.560661 4675 generic.go:334] "Generic (PLEG): container finished" podID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerID="8b4a9f292ced3e486592fa94663b6791dfa5aefbd09a64098395e90f45ef4c65" exitCode=0
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.560733 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" event={"ID":"86196f7d-6aff-4774-9ceb-7d5581f8d38a","Type":"ContainerDied","Data":"8b4a9f292ced3e486592fa94663b6791dfa5aefbd09a64098395e90f45ef4c65"}
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.586206 4675 generic.go:334] "Generic (PLEG): container finished" podID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerID="44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e" exitCode=143
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.586286 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"504c7fbb-9077-42da-95bd-aa45b9d13bed","Type":"ContainerDied","Data":"44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e"}
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.608010 4675 generic.go:334] "Generic (PLEG): container finished" podID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerID="cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544" exitCode=143
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.609820 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1436ed81-90ae-4f3c-b854-539012fbf57e","Type":"ContainerDied","Data":"cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544"}
Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.647827 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-jr7sk" podStartSLOduration=2.780311485 podStartE2EDuration="9.64780616s" podCreationTimestamp="2025-11-21 14:00:37 +0000 UTC" firstStartedPulling="2025-11-21 14:00:38.165262691 +0000 UTC m=+1714.891677418" lastFinishedPulling="2025-11-21 14:00:45.032757366 +0000 UTC m=+1721.759172093" observedRunningTime="2025-11-21 14:00:46.632286412 +0000 UTC m=+1723.358701149" watchObservedRunningTime="2025-11-21 14:00:46.64780616 +0000 UTC m=+1723.374220887"
lastFinishedPulling="2025-11-21 14:00:45.032757366 +0000 UTC m=+1721.759172093" observedRunningTime="2025-11-21 14:00:46.632286412 +0000 UTC m=+1723.358701149" watchObservedRunningTime="2025-11-21 14:00:46.64780616 +0000 UTC m=+1723.374220887" Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.838634 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.868682 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-svc\") pod \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.868759 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-swift-storage-0\") pod \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.868828 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-nb\") pod \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.868895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-config\") pod \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.868952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j42d\" (UniqueName: \"kubernetes.io/projected/86196f7d-6aff-4774-9ceb-7d5581f8d38a-kube-api-access-8j42d\") pod \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.869086 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-sb\") pod \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\" (UID: \"86196f7d-6aff-4774-9ceb-7d5581f8d38a\") " Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.886123 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86196f7d-6aff-4774-9ceb-7d5581f8d38a-kube-api-access-8j42d" (OuterVolumeSpecName: "kube-api-access-8j42d") pod "86196f7d-6aff-4774-9ceb-7d5581f8d38a" (UID: "86196f7d-6aff-4774-9ceb-7d5581f8d38a"). InnerVolumeSpecName "kube-api-access-8j42d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.971923 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j42d\" (UniqueName: \"kubernetes.io/projected/86196f7d-6aff-4774-9ceb-7d5581f8d38a-kube-api-access-8j42d\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:46 crc kubenswrapper[4675]: I1121 14:00:46.999839 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86196f7d-6aff-4774-9ceb-7d5581f8d38a" (UID: "86196f7d-6aff-4774-9ceb-7d5581f8d38a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.004576 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86196f7d-6aff-4774-9ceb-7d5581f8d38a" (UID: "86196f7d-6aff-4774-9ceb-7d5581f8d38a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.015807 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "86196f7d-6aff-4774-9ceb-7d5581f8d38a" (UID: "86196f7d-6aff-4774-9ceb-7d5581f8d38a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.028969 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-config" (OuterVolumeSpecName: "config") pod "86196f7d-6aff-4774-9ceb-7d5581f8d38a" (UID: "86196f7d-6aff-4774-9ceb-7d5581f8d38a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.042400 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86196f7d-6aff-4774-9ceb-7d5581f8d38a" (UID: "86196f7d-6aff-4774-9ceb-7d5581f8d38a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.075745 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.075780 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.075793 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.075802 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-config\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.075810 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86196f7d-6aff-4774-9ceb-7d5581f8d38a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.158396 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.176935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504c7fbb-9077-42da-95bd-aa45b9d13bed-logs\") pod \"504c7fbb-9077-42da-95bd-aa45b9d13bed\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.177054 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-combined-ca-bundle\") pod \"504c7fbb-9077-42da-95bd-aa45b9d13bed\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.177143 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fclxp\" (UniqueName: \"kubernetes.io/projected/504c7fbb-9077-42da-95bd-aa45b9d13bed-kube-api-access-fclxp\") pod \"504c7fbb-9077-42da-95bd-aa45b9d13bed\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.177237 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-nova-metadata-tls-certs\") pod \"504c7fbb-9077-42da-95bd-aa45b9d13bed\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.177359 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-config-data\") pod \"504c7fbb-9077-42da-95bd-aa45b9d13bed\" (UID: \"504c7fbb-9077-42da-95bd-aa45b9d13bed\") " Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.177774 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504c7fbb-9077-42da-95bd-aa45b9d13bed-logs" 
(OuterVolumeSpecName: "logs") pod "504c7fbb-9077-42da-95bd-aa45b9d13bed" (UID: "504c7fbb-9077-42da-95bd-aa45b9d13bed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.178122 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504c7fbb-9077-42da-95bd-aa45b9d13bed-logs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.189328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504c7fbb-9077-42da-95bd-aa45b9d13bed-kube-api-access-fclxp" (OuterVolumeSpecName: "kube-api-access-fclxp") pod "504c7fbb-9077-42da-95bd-aa45b9d13bed" (UID: "504c7fbb-9077-42da-95bd-aa45b9d13bed"). InnerVolumeSpecName "kube-api-access-fclxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.239306 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "504c7fbb-9077-42da-95bd-aa45b9d13bed" (UID: "504c7fbb-9077-42da-95bd-aa45b9d13bed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.242268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-config-data" (OuterVolumeSpecName: "config-data") pod "504c7fbb-9077-42da-95bd-aa45b9d13bed" (UID: "504c7fbb-9077-42da-95bd-aa45b9d13bed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.284014 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.284047 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.284057 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fclxp\" (UniqueName: \"kubernetes.io/projected/504c7fbb-9077-42da-95bd-aa45b9d13bed-kube-api-access-fclxp\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.319308 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "504c7fbb-9077-42da-95bd-aa45b9d13bed" (UID: "504c7fbb-9077-42da-95bd-aa45b9d13bed"). InnerVolumeSpecName "nova-metadata-tls-certs". 
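The entries above show kubelet's three-step volume teardown for the deleted dnsmasq-dns and nova-metadata-0 pods: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached". A minimal sketch for auditing that every started unmount reached the detached state, assuming the journal has been saved to a plain-text file; the helper name and regexes are illustrative, matched only against the two message formats quoted above:

```python
import re
from collections import Counter

# Illustrative helper (not part of kubelet): audit the teardown above.
# Every "UnmountVolume started" should be balanced by a "Volume detached"
# carrying the same UniqueName.
STARTED = re.compile(r'operationExecutor\.UnmountVolume started for volume "[^"]+" \(UniqueName: "([^"]+)"')
DETACHED = re.compile(r'Volume detached for volume "[^"]+" \(UniqueName: "([^"]+)"')

def unfinished_unmounts(journal_path):
    """Return UniqueNames whose unmount started but never reported a detach."""
    pending = Counter()
    with open(journal_path) as f:
        for line in f:
            line = line.replace('\\"', '"')  # tolerate escaped quotes in pasted logs
            if (m := STARTED.search(line)):
                pending[m.group(1)] += 1
            elif (m := DETACHED.search(line)):
                pending[m.group(1)] -= 1
    return [name for name, n in pending.items() if n > 0]
```

In this excerpt the bookkeeping closes quickly: all five nova-metadata-0 unmounts that start at 14:00:47.177 report "Volume detached" by 14:00:47.386.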
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.385725 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/504c7fbb-9077-42da-95bd-aa45b9d13bed-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.620950 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" event={"ID":"86196f7d-6aff-4774-9ceb-7d5581f8d38a","Type":"ContainerDied","Data":"a688f913c9deafd2d490538ca83b668cb68f369183d03aa9910252b732e1dde1"} Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.620974 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ptgnp" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.621341 4675 scope.go:117] "RemoveContainer" containerID="8b4a9f292ced3e486592fa94663b6791dfa5aefbd09a64098395e90f45ef4c65" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.623501 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.624663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"504c7fbb-9077-42da-95bd-aa45b9d13bed","Type":"ContainerDied","Data":"814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a"} Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.624610 4675 generic.go:334] "Generic (PLEG): container finished" podID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerID="814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a" exitCode=0 Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.624854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"504c7fbb-9077-42da-95bd-aa45b9d13bed","Type":"ContainerDied","Data":"23f7c377440a2565fec6d32d213aafd1af3be18713d956148fb72419d625eb68"} Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.654100 4675 scope.go:117] "RemoveContainer" containerID="e05441236adf386dd8932b252fa0d03c1f18e13ca3dd284dff0b87be7127e03b" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.676579 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ptgnp"] Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.690345 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ptgnp"] Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.692764 4675 scope.go:117] "RemoveContainer" containerID="814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.702708 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.713647 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.724994 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.725524 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerName="init" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725540 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" 
containerName="init" Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.725556 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35624ff8-b298-4e69-a4d6-8dd5e3401b07" containerName="nova-manage" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725562 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35624ff8-b298-4e69-a4d6-8dd5e3401b07" containerName="nova-manage" Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.725577 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-log" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725585 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-log" Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.725594 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-metadata" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725599 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-metadata" Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.725616 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerName="dnsmasq-dns" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725622 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerName="dnsmasq-dns" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725922 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-metadata" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725947 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" containerName="nova-metadata-log" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725958 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="35624ff8-b298-4e69-a4d6-8dd5e3401b07" containerName="nova-manage" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.725970 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" containerName="dnsmasq-dns" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.727258 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.730202 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.730433 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.730754 4675 scope.go:117] "RemoveContainer" containerID="44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.740765 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.774761 4675 scope.go:117] "RemoveContainer" containerID="814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a" Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.775088 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a\": container with ID starting with 814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a not found: ID does not exist" containerID="814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.775126 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a"} err="failed to get container status \"814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a\": rpc error: code = NotFound desc = could not find container \"814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a\": container with ID starting with 814311b857c74a3f7c62973ecec7f15afb1c8f3d338660465ceae679f112521a not found: ID does not exist" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.775150 4675 scope.go:117] "RemoveContainer" containerID="44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e" Nov 21 14:00:47 crc kubenswrapper[4675]: E1121 14:00:47.775354 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e\": container with ID starting with 44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e not found: ID does not exist" containerID="44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.775373 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e"} err="failed to get container status \"44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e\": rpc error: code = NotFound desc = could not find container \"44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e\": container with ID starting with 44e259d48a463b3d304140a1a2d1287216e9e63ec4956c787f5b8c595099031e not found: ID does not exist" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.795646 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rsns\" (UniqueName: \"kubernetes.io/projected/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-kube-api-access-7rsns\") pod \"nova-metadata-0\" (UID: 
\"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.795742 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-config-data\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.795773 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.795859 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.795904 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-logs\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.897625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rsns\" (UniqueName: \"kubernetes.io/projected/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-kube-api-access-7rsns\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.897763 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-config-data\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.897807 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.897867 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.897936 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-logs\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.898532 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-logs\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.902860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.903239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-config-data\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.903626 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:47 crc kubenswrapper[4675]: I1121 14:00:47.917504 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rsns\" (UniqueName: \"kubernetes.io/projected/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-kube-api-access-7rsns\") pod \"nova-metadata-0\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") " pod="openstack/nova-metadata-0" Nov 21 14:00:48 crc kubenswrapper[4675]: I1121 14:00:48.060026 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 14:00:48 crc kubenswrapper[4675]: I1121 14:00:48.579739 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:00:48 crc kubenswrapper[4675]: I1121 14:00:48.642267 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c6e10d-66b8-4566-80d2-ed0ce8b08e64","Type":"ContainerStarted","Data":"8e3a9b39dd1686be7fb65659e636e0ddab1461cb408df00f6203491fab9d0942"} Nov 21 14:00:48 crc kubenswrapper[4675]: I1121 14:00:48.849266 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:00:48 crc kubenswrapper[4675]: E1121 14:00:48.849765 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:00:48 crc kubenswrapper[4675]: I1121 14:00:48.876019 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504c7fbb-9077-42da-95bd-aa45b9d13bed" path="/var/lib/kubelet/pods/504c7fbb-9077-42da-95bd-aa45b9d13bed/volumes" Nov 21 14:00:48 crc kubenswrapper[4675]: I1121 14:00:48.877123 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86196f7d-6aff-4774-9ceb-7d5581f8d38a" path="/var/lib/kubelet/pods/86196f7d-6aff-4774-9ceb-7d5581f8d38a/volumes" Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.659518 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="17e64a54-c69c-4c4f-b0c6-01b6742785f7" containerID="97de6efec7ca622936f6872e86b29755bf57aaba3307255cfab2cda2acd5107a" exitCode=0 Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.659605 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17e64a54-c69c-4c4f-b0c6-01b6742785f7","Type":"ContainerDied","Data":"97de6efec7ca622936f6872e86b29755bf57aaba3307255cfab2cda2acd5107a"} Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.662749 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c6e10d-66b8-4566-80d2-ed0ce8b08e64","Type":"ContainerStarted","Data":"34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139"} Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.662838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c6e10d-66b8-4566-80d2-ed0ce8b08e64","Type":"ContainerStarted","Data":"21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3"} Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.664048 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b591ff7-fb92-466b-870c-e2138e739b42" containerID="3998a59caa25b8b6cada834813c1d8186a9b9e1a99d2a2e1d57ddc343932acb8" exitCode=0 Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.664097 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jr7sk" event={"ID":"7b591ff7-fb92-466b-870c-e2138e739b42","Type":"ContainerDied","Data":"3998a59caa25b8b6cada834813c1d8186a9b9e1a99d2a2e1d57ddc343932acb8"} Nov 21 14:00:49 crc kubenswrapper[4675]: I1121 14:00:49.695046 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.695021319 podStartE2EDuration="2.695021319s" podCreationTimestamp="2025-11-21 14:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:49.683114162 +0000 UTC m=+1726.409528889" watchObservedRunningTime="2025-11-21 14:00:49.695021319 +0000 UTC m=+1726.421436056" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.139614 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.259164 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-combined-ca-bundle\") pod \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.259302 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv7r8\" (UniqueName: \"kubernetes.io/projected/17e64a54-c69c-4c4f-b0c6-01b6742785f7-kube-api-access-mv7r8\") pod \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.259348 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-config-data\") pod \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\" (UID: \"17e64a54-c69c-4c4f-b0c6-01b6742785f7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.273341 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e64a54-c69c-4c4f-b0c6-01b6742785f7-kube-api-access-mv7r8" (OuterVolumeSpecName: "kube-api-access-mv7r8") pod "17e64a54-c69c-4c4f-b0c6-01b6742785f7" (UID: "17e64a54-c69c-4c4f-b0c6-01b6742785f7"). InnerVolumeSpecName "kube-api-access-mv7r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.301013 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e64a54-c69c-4c4f-b0c6-01b6742785f7" (UID: "17e64a54-c69c-4c4f-b0c6-01b6742785f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.310308 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-config-data" (OuterVolumeSpecName: "config-data") pod "17e64a54-c69c-4c4f-b0c6-01b6742785f7" (UID: "17e64a54-c69c-4c4f-b0c6-01b6742785f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.327829 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.363554 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.363583 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv7r8\" (UniqueName: \"kubernetes.io/projected/17e64a54-c69c-4c4f-b0c6-01b6742785f7-kube-api-access-mv7r8\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.363594 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e64a54-c69c-4c4f-b0c6-01b6742785f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.465318 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-config-data\") pod \"1436ed81-90ae-4f3c-b854-539012fbf57e\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.465517 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67phk\" (UniqueName: \"kubernetes.io/projected/1436ed81-90ae-4f3c-b854-539012fbf57e-kube-api-access-67phk\") pod \"1436ed81-90ae-4f3c-b854-539012fbf57e\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.465569 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-combined-ca-bundle\") pod \"1436ed81-90ae-4f3c-b854-539012fbf57e\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.465616 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1436ed81-90ae-4f3c-b854-539012fbf57e-logs\") pod \"1436ed81-90ae-4f3c-b854-539012fbf57e\" (UID: \"1436ed81-90ae-4f3c-b854-539012fbf57e\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.466983 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1436ed81-90ae-4f3c-b854-539012fbf57e-logs" (OuterVolumeSpecName: "logs") pod "1436ed81-90ae-4f3c-b854-539012fbf57e" (UID: "1436ed81-90ae-4f3c-b854-539012fbf57e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.478331 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1436ed81-90ae-4f3c-b854-539012fbf57e-kube-api-access-67phk" (OuterVolumeSpecName: "kube-api-access-67phk") pod "1436ed81-90ae-4f3c-b854-539012fbf57e" (UID: "1436ed81-90ae-4f3c-b854-539012fbf57e"). InnerVolumeSpecName "kube-api-access-67phk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.563191 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1436ed81-90ae-4f3c-b854-539012fbf57e" (UID: "1436ed81-90ae-4f3c-b854-539012fbf57e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.568990 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67phk\" (UniqueName: \"kubernetes.io/projected/1436ed81-90ae-4f3c-b854-539012fbf57e-kube-api-access-67phk\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.569024 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.569037 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1436ed81-90ae-4f3c-b854-539012fbf57e-logs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.579261 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-config-data" (OuterVolumeSpecName: "config-data") pod "1436ed81-90ae-4f3c-b854-539012fbf57e" (UID: "1436ed81-90ae-4f3c-b854-539012fbf57e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.609312 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.673611 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1436ed81-90ae-4f3c-b854-539012fbf57e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.683456 4675 generic.go:334] "Generic (PLEG): container finished" podID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerID="489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10" exitCode=137 Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.683549 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerDied","Data":"489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10"} Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.683579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44deb6cd-3059-4e60-b67c-2c3006654af7","Type":"ContainerDied","Data":"e559d6cecee8c3732d52368929863b424ac24fbb1a6c52970604f2132aea7258"} Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.683596 4675 scope.go:117] "RemoveContainer" containerID="489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.683617 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.685831 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.686921 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17e64a54-c69c-4c4f-b0c6-01b6742785f7","Type":"ContainerDied","Data":"8c3431c6ce32fbb53879660bbcb7e554e6bf349a7f7f96f04113cea1590f114b"} Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.692377 4675 generic.go:334] "Generic (PLEG): container finished" podID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerID="b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f" exitCode=0 Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.692426 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.692463 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1436ed81-90ae-4f3c-b854-539012fbf57e","Type":"ContainerDied","Data":"b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f"} Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.692491 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1436ed81-90ae-4f3c-b854-539012fbf57e","Type":"ContainerDied","Data":"fcfa5a9a121020802882282bd2794d67165403b6adb73b7424bfbcf51e4c9dcb"} Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.713888 4675 scope.go:117] "RemoveContainer" containerID="a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.730231 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.744057 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761026 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.761782 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="proxy-httpd" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761800 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="proxy-httpd" Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.761864 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-log" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761872 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-log" Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.761905 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-central-agent" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761911 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-central-agent" Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.761927 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-api" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761935 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-api" Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.761955 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e64a54-c69c-4c4f-b0c6-01b6742785f7" containerName="nova-scheduler-scheduler" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761961 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e64a54-c69c-4c4f-b0c6-01b6742785f7" containerName="nova-scheduler-scheduler" Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.761987 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-notification-agent" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.761993 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-notification-agent" Nov 21 14:00:50 crc kubenswrapper[4675]: E1121 14:00:50.762006 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="sg-core" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762012 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="sg-core" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762311 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-notification-agent" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762326 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="sg-core" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762335 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e64a54-c69c-4c4f-b0c6-01b6742785f7" containerName="nova-scheduler-scheduler" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762344 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-log" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762352 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="proxy-httpd" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762373 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" containerName="nova-api-api" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.762388 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" containerName="ceilometer-central-agent" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.763351 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.768889 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.780978 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-config-data\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781025 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-scripts\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781141 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-combined-ca-bundle\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781195 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-sg-core-conf-yaml\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781211 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz7gk\" (UniqueName: \"kubernetes.io/projected/44deb6cd-3059-4e60-b67c-2c3006654af7-kube-api-access-bz7gk\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-log-httpd\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781275 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-run-httpd\") pod \"44deb6cd-3059-4e60-b67c-2c3006654af7\" (UID: \"44deb6cd-3059-4e60-b67c-2c3006654af7\") " Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.781925 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.786045 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.795454 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.796823 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44deb6cd-3059-4e60-b67c-2c3006654af7-kube-api-access-bz7gk" (OuterVolumeSpecName: "kube-api-access-bz7gk") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "kube-api-access-bz7gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.798505 4675 scope.go:117] "RemoveContainer" containerID="1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.805771 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.812544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-scripts" (OuterVolumeSpecName: "scripts") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.821092 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.840992 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.843966 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.847229 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.878224 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1436ed81-90ae-4f3c-b854-539012fbf57e" path="/var/lib/kubelet/pods/1436ed81-90ae-4f3c-b854-539012fbf57e/volumes" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.878863 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e64a54-c69c-4c4f-b0c6-01b6742785f7" path="/var/lib/kubelet/pods/17e64a54-c69c-4c4f-b0c6-01b6742785f7/volumes" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.883222 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.883287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zx9\" (UniqueName: \"kubernetes.io/projected/a8bdc914-dba8-4fcf-ba94-31eff03448cb-kube-api-access-l6zx9\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.883583 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-config-data\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.884095 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz7gk\" (UniqueName: \"kubernetes.io/projected/44deb6cd-3059-4e60-b67c-2c3006654af7-kube-api-access-bz7gk\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.884115 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.884168 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44deb6cd-3059-4e60-b67c-2c3006654af7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.884178 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.886358 4675 scope.go:117] "RemoveContainer" containerID="f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.896555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.919099 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.962156 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-config-data" (OuterVolumeSpecName: "config-data") pod "44deb6cd-3059-4e60-b67c-2c3006654af7" (UID: "44deb6cd-3059-4e60-b67c-2c3006654af7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.993504 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-config-data\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.993906 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.993957 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zx9\" (UniqueName: \"kubernetes.io/projected/a8bdc914-dba8-4fcf-ba94-31eff03448cb-kube-api-access-l6zx9\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.994020 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxqm\" (UniqueName: \"kubernetes.io/projected/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-kube-api-access-kmxqm\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.994076 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-config-data\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.994153 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.994181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-logs\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:50 crc kubenswrapper[4675]: 
I1121 14:00:50.994261 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.994274 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.994283 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44deb6cd-3059-4e60-b67c-2c3006654af7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.997232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:50 crc kubenswrapper[4675]: I1121 14:00:50.999691 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-config-data\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.015754 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zx9\" (UniqueName: \"kubernetes.io/projected/a8bdc914-dba8-4fcf-ba94-31eff03448cb-kube-api-access-l6zx9\") pod \"nova-scheduler-0\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") " pod="openstack/nova-scheduler-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.069851 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.097905 4675 scope.go:117] "RemoveContainer" containerID="489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.098641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-config-data\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.098743 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmxqm\" (UniqueName: \"kubernetes.io/projected/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-kube-api-access-kmxqm\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.098817 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.098845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-logs\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " 
pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.099262 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-logs\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.099486 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10\": container with ID starting with 489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10 not found: ID does not exist" containerID="489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.099548 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10"} err="failed to get container status \"489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10\": rpc error: code = NotFound desc = could not find container \"489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10\": container with ID starting with 489dcbc613820df87ed0aa8a5664b9624d8cebcca91f2748182374a53eb58c10 not found: ID does not exist" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.099582 4675 scope.go:117] "RemoveContainer" containerID="a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240" Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.100409 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240\": container with ID starting with a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240 not found: ID does not exist" containerID="a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.100449 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240"} err="failed to get container status \"a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240\": rpc error: code = NotFound desc = could not find container \"a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240\": container with ID starting with a66708bc3bc42d25cd23cbbaaab85bb1b20c87c0e184ce4d9167a265ad655240 not found: ID does not exist" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.100485 4675 scope.go:117] "RemoveContainer" containerID="1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450" Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.100784 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450\": container with ID starting with 1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450 not found: ID does not exist" containerID="1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.100814 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450"} err="failed to get container 
status \"1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450\": rpc error: code = NotFound desc = could not find container \"1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450\": container with ID starting with 1f8b72684246796b1abcaea9630a29462b563e0ca1e955d021822cf0454cb450 not found: ID does not exist" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.100832 4675 scope.go:117] "RemoveContainer" containerID="f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f" Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.101253 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f\": container with ID starting with f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f not found: ID does not exist" containerID="f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.101279 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f"} err="failed to get container status \"f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f\": rpc error: code = NotFound desc = could not find container \"f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f\": container with ID starting with f078f5bc7005b38b2ee940d9f90a37733a7fd56024f2df12e9a8fc1b6391284f not found: ID does not exist" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.101293 4675 scope.go:117] "RemoveContainer" containerID="97de6efec7ca622936f6872e86b29755bf57aaba3307255cfab2cda2acd5107a" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.104552 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-config-data\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.108661 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.108743 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.119466 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.125672 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxqm\" (UniqueName: \"kubernetes.io/projected/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-kube-api-access-kmxqm\") pod \"nova-api-0\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") " pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.129127 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.165186 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.173300 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.187758 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.201886 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.201671 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.233240 4675 scope.go:117] "RemoveContainer" containerID="b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.239699 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.279441 4675 scope.go:117] "RemoveContainer" containerID="cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-scripts\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309170 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309208 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-log-httpd\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-config-data\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309278 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309314 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-run-httpd\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.309331 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcdq\" (UniqueName: 
\"kubernetes.io/projected/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-kube-api-access-6mcdq\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.329214 4675 scope.go:117] "RemoveContainer" containerID="b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f" Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.336570 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f\": container with ID starting with b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f not found: ID does not exist" containerID="b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.336629 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f"} err="failed to get container status \"b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f\": rpc error: code = NotFound desc = could not find container \"b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f\": container with ID starting with b709cb2c491401c83b37b53f28bc2276de7de296a91363cd5f7f01141b35638f not found: ID does not exist" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.336666 4675 scope.go:117] "RemoveContainer" containerID="cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544" Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.337050 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544\": container with ID starting with cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544 not found: ID does not exist" containerID="cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.337092 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544"} err="failed to get container status \"cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544\": rpc error: code = NotFound desc = could not find container \"cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544\": container with ID starting with cf9a74452036255d020783e4df18cdb856fa1eab6861755776866ba1794e5544 not found: ID does not exist" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.383879 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.410607 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ckwz\" (UniqueName: \"kubernetes.io/projected/7b591ff7-fb92-466b-870c-e2138e739b42-kube-api-access-6ckwz\") pod \"7b591ff7-fb92-466b-870c-e2138e739b42\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.410747 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-combined-ca-bundle\") pod \"7b591ff7-fb92-466b-870c-e2138e739b42\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.410779 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-scripts\") pod \"7b591ff7-fb92-466b-870c-e2138e739b42\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.410915 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-config-data\") pod \"7b591ff7-fb92-466b-870c-e2138e739b42\" (UID: \"7b591ff7-fb92-466b-870c-e2138e739b42\") " Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411215 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcdq\" (UniqueName: \"kubernetes.io/projected/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-kube-api-access-6mcdq\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411307 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-scripts\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411456 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-log-httpd\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411477 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-config-data\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411536 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-run-httpd\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.411945 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-run-httpd\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.415434 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-log-httpd\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.419189 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.419423 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.420432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-config-data\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.420718 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-scripts\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.540404 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-scripts" (OuterVolumeSpecName: "scripts") pod "7b591ff7-fb92-466b-870c-e2138e739b42" (UID: "7b591ff7-fb92-466b-870c-e2138e739b42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.541334 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b591ff7-fb92-466b-870c-e2138e739b42-kube-api-access-6ckwz" (OuterVolumeSpecName: "kube-api-access-6ckwz") pod "7b591ff7-fb92-466b-870c-e2138e739b42" (UID: "7b591ff7-fb92-466b-870c-e2138e739b42"). InnerVolumeSpecName "kube-api-access-6ckwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.573540 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b591ff7-fb92-466b-870c-e2138e739b42" (UID: "7b591ff7-fb92-466b-870c-e2138e739b42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.573889 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-config-data" (OuterVolumeSpecName: "config-data") pod "7b591ff7-fb92-466b-870c-e2138e739b42" (UID: "7b591ff7-fb92-466b-870c-e2138e739b42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.577213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcdq\" (UniqueName: \"kubernetes.io/projected/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-kube-api-access-6mcdq\") pod \"ceilometer-0\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.618633 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ckwz\" (UniqueName: \"kubernetes.io/projected/7b591ff7-fb92-466b-870c-e2138e739b42-kube-api-access-6ckwz\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.618666 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.618676 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.618684 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b591ff7-fb92-466b-870c-e2138e739b42-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.730084 4675 generic.go:334] "Generic (PLEG): container finished" podID="6993d84e-5485-4c53-aaa7-9ecce1b9689b" containerID="c441edb88f548b85052c4e3f5fd231272728ce23736d5fe0ce386c13e2538806" exitCode=0 Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.730166 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" event={"ID":"6993d84e-5485-4c53-aaa7-9ecce1b9689b","Type":"ContainerDied","Data":"c441edb88f548b85052c4e3f5fd231272728ce23736d5fe0ce386c13e2538806"} Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.748367 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jr7sk" event={"ID":"7b591ff7-fb92-466b-870c-e2138e739b42","Type":"ContainerDied","Data":"d95772c6391edb8e3235e8d320a6178c82363c2bd0d66452d0e03c701745a5e9"} Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.748414 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95772c6391edb8e3235e8d320a6178c82363c2bd0d66452d0e03c701745a5e9" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.748416 4675 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jr7sk" Nov 21 14:00:51 crc kubenswrapper[4675]: W1121 14:00:51.783000 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b WatchSource:0}: Error finding container 5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b: Status 404 returned error can't find the container with id 5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.794424 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.869461 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.890515 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: E1121 14:00:51.891857 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b591ff7-fb92-466b-870c-e2138e739b42" containerName="aodh-db-sync" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.891883 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b591ff7-fb92-466b-870c-e2138e739b42" containerName="aodh-db-sync" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.892243 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b591ff7-fb92-466b-870c-e2138e739b42" containerName="aodh-db-sync" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.898715 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.903664 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-swnjz" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.903938 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.904082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.911591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.937205 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:51 crc kubenswrapper[4675]: I1121 14:00:51.992873 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.012300 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.025914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-scripts\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.025956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-config-data\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.026017 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptlxg\" (UniqueName: \"kubernetes.io/projected/fb1a7dc1-fee4-4671-9117-d653c3873ea8-kube-api-access-ptlxg\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.026343 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.130839 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.132630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-scripts\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.132669 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-config-data\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.132739 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptlxg\" (UniqueName: \"kubernetes.io/projected/fb1a7dc1-fee4-4671-9117-d653c3873ea8-kube-api-access-ptlxg\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.153802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-scripts\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.164007 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.179264 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptlxg\" (UniqueName: \"kubernetes.io/projected/fb1a7dc1-fee4-4671-9117-d653c3873ea8-kube-api-access-ptlxg\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.181152 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-config-data\") pod \"aodh-0\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.195205 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmqj7"] Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.271593 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.533452 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:52 crc kubenswrapper[4675]: W1121 14:00:52.548651 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ca04b3a_e004_41ba_a3d7_e0c34ee9adbf.slice/crio-e4aa7193c402fd0ee0a22443e1d638d1c4699bab3badfa5152e36022b2900ac0 WatchSource:0}: Error finding container e4aa7193c402fd0ee0a22443e1d638d1c4699bab3badfa5152e36022b2900ac0: Status 404 returned error can't find the container with id e4aa7193c402fd0ee0a22443e1d638d1c4699bab3badfa5152e36022b2900ac0 Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.798296 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14","Type":"ContainerStarted","Data":"a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9"} Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.798344 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14","Type":"ContainerStarted","Data":"ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34"} Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.798357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14","Type":"ContainerStarted","Data":"f694966674fcb25b20fa614370f39fd7431eb45365992ebb1de6752c3755badc"} Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.803600 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerStarted","Data":"e4aa7193c402fd0ee0a22443e1d638d1c4699bab3badfa5152e36022b2900ac0"} Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.819207 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bdc914-dba8-4fcf-ba94-31eff03448cb","Type":"ContainerStarted","Data":"956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c"} Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.819256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bdc914-dba8-4fcf-ba94-31eff03448cb","Type":"ContainerStarted","Data":"5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b"} Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.838509 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.838485474 podStartE2EDuration="2.838485474s" podCreationTimestamp="2025-11-21 14:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:52.819862319 +0000 UTC m=+1729.546277066" watchObservedRunningTime="2025-11-21 14:00:52.838485474 +0000 UTC m=+1729.564900201" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.849730 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.849715624 podStartE2EDuration="2.849715624s" podCreationTimestamp="2025-11-21 14:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:52.849383916 +0000 UTC 
m=+1729.575798643" watchObservedRunningTime="2025-11-21 14:00:52.849715624 +0000 UTC m=+1729.576130341" Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.872412 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44deb6cd-3059-4e60-b67c-2c3006654af7" path="/var/lib/kubelet/pods/44deb6cd-3059-4e60-b67c-2c3006654af7/volumes" Nov 21 14:00:52 crc kubenswrapper[4675]: W1121 14:00:52.948508 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9 WatchSource:0}: Error finding container aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9: Status 404 returned error can't find the container with id aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9 Nov 21 14:00:52 crc kubenswrapper[4675]: I1121 14:00:52.961259 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.060356 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.060408 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.504537 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.686400 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-scripts\") pod \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.686461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-combined-ca-bundle\") pod \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.686548 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-config-data\") pod \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.686718 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssbqv\" (UniqueName: \"kubernetes.io/projected/6993d84e-5485-4c53-aaa7-9ecce1b9689b-kube-api-access-ssbqv\") pod \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\" (UID: \"6993d84e-5485-4c53-aaa7-9ecce1b9689b\") " Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.698344 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-scripts" (OuterVolumeSpecName: "scripts") pod "6993d84e-5485-4c53-aaa7-9ecce1b9689b" (UID: "6993d84e-5485-4c53-aaa7-9ecce1b9689b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.734346 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6993d84e-5485-4c53-aaa7-9ecce1b9689b-kube-api-access-ssbqv" (OuterVolumeSpecName: "kube-api-access-ssbqv") pod "6993d84e-5485-4c53-aaa7-9ecce1b9689b" (UID: "6993d84e-5485-4c53-aaa7-9ecce1b9689b"). InnerVolumeSpecName "kube-api-access-ssbqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.776685 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6993d84e-5485-4c53-aaa7-9ecce1b9689b" (UID: "6993d84e-5485-4c53-aaa7-9ecce1b9689b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.798560 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssbqv\" (UniqueName: \"kubernetes.io/projected/6993d84e-5485-4c53-aaa7-9ecce1b9689b-kube-api-access-ssbqv\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.798597 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.798613 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.827191 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-config-data" (OuterVolumeSpecName: "config-data") pod "6993d84e-5485-4c53-aaa7-9ecce1b9689b" (UID: "6993d84e-5485-4c53-aaa7-9ecce1b9689b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.851388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerStarted","Data":"7a9139b11b30fea2205b7fcce0dcb8523000c1a4d10570af5316e85c60026e17"} Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.868470 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 14:00:53 crc kubenswrapper[4675]: E1121 14:00:53.868998 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6993d84e-5485-4c53-aaa7-9ecce1b9689b" containerName="nova-cell1-conductor-db-sync" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.869018 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6993d84e-5485-4c53-aaa7-9ecce1b9689b" containerName="nova-cell1-conductor-db-sync" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.869292 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6993d84e-5485-4c53-aaa7-9ecce1b9689b" containerName="nova-cell1-conductor-db-sync" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.874145 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.880458 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" event={"ID":"6993d84e-5485-4c53-aaa7-9ecce1b9689b","Type":"ContainerDied","Data":"8909b20e51a8cff1957bfe0df7f792c84ba524f59d52509040b3550526c6d144"} Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.880503 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8909b20e51a8cff1957bfe0df7f792c84ba524f59d52509040b3550526c6d144" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.880588 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rl7zh" Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.891228 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.896383 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tmqj7" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="registry-server" containerID="cri-o://e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f" gracePeriod=2 Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.896661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerStarted","Data":"aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9"} Nov 21 14:00:53 crc kubenswrapper[4675]: I1121 14:00:53.900589 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6993d84e-5485-4c53-aaa7-9ecce1b9689b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.003672 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.004081 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.004563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txr6s\" (UniqueName: \"kubernetes.io/projected/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-kube-api-access-txr6s\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.106832 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txr6s\" (UniqueName: \"kubernetes.io/projected/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-kube-api-access-txr6s\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.107138 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.107238 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.112836 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.113612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.131767 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txr6s\" (UniqueName: \"kubernetes.io/projected/a2816c5b-51b2-4542-b0ff-cdc5bb61c948-kube-api-access-txr6s\") pod \"nova-cell1-conductor-0\" (UID: \"a2816c5b-51b2-4542-b0ff-cdc5bb61c948\") " pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.241976 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.508550 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.623555 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-utilities\") pod \"f0093acc-562a-48c9-b1d1-bde5cdb129be\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.623862 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5mpj\" (UniqueName: \"kubernetes.io/projected/f0093acc-562a-48c9-b1d1-bde5cdb129be-kube-api-access-l5mpj\") pod \"f0093acc-562a-48c9-b1d1-bde5cdb129be\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.623950 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-catalog-content\") pod \"f0093acc-562a-48c9-b1d1-bde5cdb129be\" (UID: \"f0093acc-562a-48c9-b1d1-bde5cdb129be\") " Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.625535 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-utilities" (OuterVolumeSpecName: "utilities") pod "f0093acc-562a-48c9-b1d1-bde5cdb129be" (UID: "f0093acc-562a-48c9-b1d1-bde5cdb129be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.642541 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0093acc-562a-48c9-b1d1-bde5cdb129be-kube-api-access-l5mpj" (OuterVolumeSpecName: "kube-api-access-l5mpj") pod "f0093acc-562a-48c9-b1d1-bde5cdb129be" (UID: "f0093acc-562a-48c9-b1d1-bde5cdb129be"). InnerVolumeSpecName "kube-api-access-l5mpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.729251 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5mpj\" (UniqueName: \"kubernetes.io/projected/f0093acc-562a-48c9-b1d1-bde5cdb129be-kube-api-access-l5mpj\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.729282 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.781422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0093acc-562a-48c9-b1d1-bde5cdb129be" (UID: "f0093acc-562a-48c9-b1d1-bde5cdb129be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.830990 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0093acc-562a-48c9-b1d1-bde5cdb129be-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.915489 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerStarted","Data":"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e"} Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.921692 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerStarted","Data":"cb70904609f11cfe7b7f48013257cc3db428606a4ba88813d7bdaa6ecfaacec1"} Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.931804 4675 generic.go:334] "Generic (PLEG): container finished" podID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerID="e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f" exitCode=0 Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.931904 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerDied","Data":"e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f"} Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.931936 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmqj7" event={"ID":"f0093acc-562a-48c9-b1d1-bde5cdb129be","Type":"ContainerDied","Data":"9b21a8bb4d2309b177f0d7cbc0c1a93e7f89c7759f1c7f0657522d70136e87d2"} Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.931957 4675 scope.go:117] "RemoveContainer" containerID="e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.932183 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmqj7" Nov 21 14:00:54 crc kubenswrapper[4675]: I1121 14:00:54.962730 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.160246 4675 scope.go:117] "RemoveContainer" containerID="bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.179535 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmqj7"] Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.189801 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tmqj7"] Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.219183 4675 scope.go:117] "RemoveContainer" containerID="3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.273108 4675 scope.go:117] "RemoveContainer" containerID="e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f" Nov 21 14:00:55 crc kubenswrapper[4675]: E1121 14:00:55.274707 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f\": container with ID starting with e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f not found: ID does not exist" containerID="e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.274745 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f"} err="failed to get container status \"e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f\": rpc error: code = NotFound desc = could not find container \"e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f\": container with ID starting with e3e8b1016e7182f0f4e1c8afec733df12bc0bb34ad716a79cf3e97e35c4dd62f not found: ID does not exist" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.274766 4675 scope.go:117] "RemoveContainer" containerID="bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1" Nov 21 14:00:55 crc kubenswrapper[4675]: E1121 14:00:55.275221 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1\": container with ID starting with bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1 not found: ID does not exist" containerID="bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.275251 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1"} err="failed to get container status \"bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1\": rpc error: code = NotFound desc = could not find container \"bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1\": container with ID starting with bfeabc15e4e90a3d745dd7f5d2691e90db7afe0d418fc0c5e7a5d29a09c89bd1 not found: ID does not exist" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.275269 4675 scope.go:117] "RemoveContainer" 
containerID="3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1" Nov 21 14:00:55 crc kubenswrapper[4675]: E1121 14:00:55.275891 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1\": container with ID starting with 3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1 not found: ID does not exist" containerID="3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.275914 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1"} err="failed to get container status \"3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1\": rpc error: code = NotFound desc = could not find container \"3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1\": container with ID starting with 3c31855c9170a731e09bb92b617af5870d3d400c1f9b2d88a23de4383827fae1 not found: ID does not exist" Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.356317 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.520371 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.977021 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerStarted","Data":"50088719636a7610e062c95f7932faa4b2ae44f3f68e83eead635a7634c3a62e"} Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.986605 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a2816c5b-51b2-4542-b0ff-cdc5bb61c948","Type":"ContainerStarted","Data":"48fe5c4025464dae67e0ada9e3d53e7957877aac7a04bfda3111a1de0994d71e"} Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.986645 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a2816c5b-51b2-4542-b0ff-cdc5bb61c948","Type":"ContainerStarted","Data":"72e0e67c601fc768726eda4aa298e78988f07cce84cf2d8b621ff655b8e90bcb"} Nov 21 14:00:55 crc kubenswrapper[4675]: I1121 14:00:55.987384 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 14:00:56 crc kubenswrapper[4675]: I1121 14:00:56.120429 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 14:00:56 crc kubenswrapper[4675]: I1121 14:00:56.901231 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" path="/var/lib/kubelet/pods/f0093acc-562a-48c9-b1d1-bde5cdb129be/volumes" Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.036492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerStarted","Data":"df141e5ca288e22bbfd59a9e2ad41ff2f2e26cb01477e40ff5cd52884b177e26"} Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.036604 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-central-agent" 
containerID="cri-o://7a9139b11b30fea2205b7fcce0dcb8523000c1a4d10570af5316e85c60026e17" gracePeriod=30 Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.037096 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="proxy-httpd" containerID="cri-o://df141e5ca288e22bbfd59a9e2ad41ff2f2e26cb01477e40ff5cd52884b177e26" gracePeriod=30 Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.037194 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="sg-core" containerID="cri-o://50088719636a7610e062c95f7932faa4b2ae44f3f68e83eead635a7634c3a62e" gracePeriod=30 Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.037213 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.037228 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-notification-agent" containerID="cri-o://cb70904609f11cfe7b7f48013257cc3db428606a4ba88813d7bdaa6ecfaacec1" gracePeriod=30 Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.040978 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerStarted","Data":"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068"} Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.059965 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=5.059942959 podStartE2EDuration="5.059942959s" podCreationTimestamp="2025-11-21 14:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:56.019660401 +0000 UTC m=+1732.746075138" watchObservedRunningTime="2025-11-21 14:00:58.059942959 +0000 UTC m=+1734.786357686" Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.060333 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.060375 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 14:00:58 crc kubenswrapper[4675]: I1121 14:00:58.069949 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.837826044 podStartE2EDuration="7.069926359s" podCreationTimestamp="2025-11-21 14:00:51 +0000 UTC" firstStartedPulling="2025-11-21 14:00:52.555602606 +0000 UTC m=+1729.282017333" lastFinishedPulling="2025-11-21 14:00:56.787702921 +0000 UTC m=+1733.514117648" observedRunningTime="2025-11-21 14:00:58.058080033 +0000 UTC m=+1734.784494770" watchObservedRunningTime="2025-11-21 14:00:58.069926359 +0000 UTC m=+1734.796341086" Nov 21 14:00:58 crc kubenswrapper[4675]: E1121 14:00:58.072294 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ca04b3a_e004_41ba_a3d7_e0c34ee9adbf.slice/crio-50088719636a7610e062c95f7932faa4b2ae44f3f68e83eead635a7634c3a62e.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:00:59 crc 
kubenswrapper[4675]: I1121 14:00:59.056092 4675 generic.go:334] "Generic (PLEG): container finished" podID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerID="df141e5ca288e22bbfd59a9e2ad41ff2f2e26cb01477e40ff5cd52884b177e26" exitCode=0 Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.056405 4675 generic.go:334] "Generic (PLEG): container finished" podID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerID="50088719636a7610e062c95f7932faa4b2ae44f3f68e83eead635a7634c3a62e" exitCode=2 Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.056414 4675 generic.go:334] "Generic (PLEG): container finished" podID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerID="cb70904609f11cfe7b7f48013257cc3db428606a4ba88813d7bdaa6ecfaacec1" exitCode=0 Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.056169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerDied","Data":"df141e5ca288e22bbfd59a9e2ad41ff2f2e26cb01477e40ff5cd52884b177e26"} Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.056474 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerDied","Data":"50088719636a7610e062c95f7932faa4b2ae44f3f68e83eead635a7634c3a62e"} Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.056487 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerDied","Data":"cb70904609f11cfe7b7f48013257cc3db428606a4ba88813d7bdaa6ecfaacec1"} Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.058729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerStarted","Data":"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565"} Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.072266 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 14:00:59 crc kubenswrapper[4675]: I1121 14:00:59.072299 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.171496 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395561-scxxf"] Nov 21 14:01:00 crc kubenswrapper[4675]: E1121 14:01:00.172703 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="registry-server" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.172724 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="registry-server" Nov 21 14:01:00 crc kubenswrapper[4675]: E1121 14:01:00.172748 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="extract-content" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.172755 4675 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="extract-content" Nov 21 14:01:00 crc kubenswrapper[4675]: E1121 14:01:00.172781 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="extract-utilities" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.172790 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="extract-utilities" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.173020 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0093acc-562a-48c9-b1d1-bde5cdb129be" containerName="registry-server" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.174111 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.213258 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395561-scxxf"] Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.292306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-combined-ca-bundle\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.292497 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-config-data\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.292654 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-fernet-keys\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.292762 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs6dq\" (UniqueName: \"kubernetes.io/projected/6a16764c-944a-48be-ba08-7b46b89ffdba-kube-api-access-hs6dq\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.395508 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-combined-ca-bundle\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.395649 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-config-data\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.395685 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-fernet-keys\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.395720 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs6dq\" (UniqueName: \"kubernetes.io/projected/6a16764c-944a-48be-ba08-7b46b89ffdba-kube-api-access-hs6dq\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.401921 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-combined-ca-bundle\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.401995 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-config-data\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.411889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-fernet-keys\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.412058 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs6dq\" (UniqueName: \"kubernetes.io/projected/6a16764c-944a-48be-ba08-7b46b89ffdba-kube-api-access-hs6dq\") pod \"keystone-cron-29395561-scxxf\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:00 crc kubenswrapper[4675]: I1121 14:01:00.508005 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.120722 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.146110 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerStarted","Data":"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff"} Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.146285 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-api" containerID="cri-o://936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e" gracePeriod=30 Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.146926 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-listener" containerID="cri-o://4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff" gracePeriod=30 Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.146978 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-notifier" containerID="cri-o://785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565" gracePeriod=30 Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.147025 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-evaluator" containerID="cri-o://04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068" gracePeriod=30 Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.149041 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.184776 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.373182703 podStartE2EDuration="10.184753986s" podCreationTimestamp="2025-11-21 14:00:51 +0000 UTC" firstStartedPulling="2025-11-21 14:00:52.951522308 +0000 UTC m=+1729.677937045" lastFinishedPulling="2025-11-21 14:01:00.763093591 +0000 UTC m=+1737.489508328" observedRunningTime="2025-11-21 14:01:01.168212823 +0000 UTC m=+1737.894627550" watchObservedRunningTime="2025-11-21 14:01:01.184753986 +0000 UTC m=+1737.911168713" Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.220805 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.356930 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395561-scxxf"] Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.385250 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 14:01:01 crc kubenswrapper[4675]: I1121 14:01:01.385293 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.164551 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" 
containerID="04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068" exitCode=0 Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.165081 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerID="936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e" exitCode=0 Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.164638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerDied","Data":"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068"} Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.165150 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerDied","Data":"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e"} Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.167363 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-scxxf" event={"ID":"6a16764c-944a-48be-ba08-7b46b89ffdba","Type":"ContainerStarted","Data":"776b4793f067dde84d561d292f98e16e4a97997b33caa6685cb52fafd0ddf08f"} Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.167492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-scxxf" event={"ID":"6a16764c-944a-48be-ba08-7b46b89ffdba","Type":"ContainerStarted","Data":"d08ef39f63c68ffc0f8717f5a5a738f57261feac19605359786e5af5905ae3db"} Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.196129 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395561-scxxf" podStartSLOduration=2.196103826 podStartE2EDuration="2.196103826s" podCreationTimestamp="2025-11-21 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:02.184276331 +0000 UTC m=+1738.910691058" watchObservedRunningTime="2025-11-21 14:01:02.196103826 +0000 UTC m=+1738.922518553" Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.468611 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:02 crc kubenswrapper[4675]: I1121 14:01:02.468668 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.182205 4675 generic.go:334] "Generic (PLEG): container finished" podID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerID="7a9139b11b30fea2205b7fcce0dcb8523000c1a4d10570af5316e85c60026e17" exitCode=0 Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.182268 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerDied","Data":"7a9139b11b30fea2205b7fcce0dcb8523000c1a4d10570af5316e85c60026e17"} Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.490902 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.594940 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-run-httpd\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.595620 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-config-data\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.595642 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.595703 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-sg-core-conf-yaml\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.595831 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcdq\" (UniqueName: \"kubernetes.io/projected/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-kube-api-access-6mcdq\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.595868 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-log-httpd\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.596128 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-combined-ca-bundle\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.596200 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-scripts\") pod \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\" (UID: \"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf\") " Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.597849 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.598452 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.603790 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-kube-api-access-6mcdq" (OuterVolumeSpecName: "kube-api-access-6mcdq") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). InnerVolumeSpecName "kube-api-access-6mcdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.604669 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-scripts" (OuterVolumeSpecName: "scripts") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.656778 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.700900 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcdq\" (UniqueName: \"kubernetes.io/projected/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-kube-api-access-6mcdq\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.700928 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.700938 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.700946 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.717349 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.793365 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-config-data" (OuterVolumeSpecName: "config-data") pod "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" (UID: "6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.803429 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.803464 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:03 crc kubenswrapper[4675]: I1121 14:01:03.849557 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:01:03 crc kubenswrapper[4675]: E1121 14:01:03.850001 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.199554 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf","Type":"ContainerDied","Data":"e4aa7193c402fd0ee0a22443e1d638d1c4699bab3badfa5152e36022b2900ac0"} Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.199609 4675 scope.go:117] "RemoveContainer" containerID="df141e5ca288e22bbfd59a9e2ad41ff2f2e26cb01477e40ff5cd52884b177e26" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.199739 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.278935 4675 scope.go:117] "RemoveContainer" containerID="50088719636a7610e062c95f7932faa4b2ae44f3f68e83eead635a7634c3a62e" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.297746 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.301131 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.324350 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.337569 4675 scope.go:117] "RemoveContainer" containerID="cb70904609f11cfe7b7f48013257cc3db428606a4ba88813d7bdaa6ecfaacec1" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.362109 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:04 crc kubenswrapper[4675]: E1121 14:01:04.363164 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="proxy-httpd" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363191 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="proxy-httpd" Nov 21 14:01:04 crc kubenswrapper[4675]: E1121 14:01:04.363232 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="sg-core" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363239 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="sg-core" Nov 21 14:01:04 crc kubenswrapper[4675]: E1121 14:01:04.363246 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-central-agent" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363252 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-central-agent" Nov 21 14:01:04 crc kubenswrapper[4675]: E1121 14:01:04.363277 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-notification-agent" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363282 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-notification-agent" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363604 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="sg-core" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363649 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-notification-agent" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363663 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="ceilometer-central-agent" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.363677 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" containerName="proxy-httpd" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.367858 4675 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.371231 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.371510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.375402 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.383554 4675 scope.go:117] "RemoveContainer" containerID="7a9139b11b30fea2205b7fcce0dcb8523000c1a4d10570af5316e85c60026e17" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562033 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-scripts\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562112 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-config-data\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562223 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-run-httpd\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562256 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxr69\" (UniqueName: \"kubernetes.io/projected/91766feb-a370-4ea5-8bdb-a7197d87c4de-kube-api-access-kxr69\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562288 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-log-httpd\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562307 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.562334 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.664750 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-scripts\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.664832 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-config-data\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.664968 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-run-httpd\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.665018 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxr69\" (UniqueName: \"kubernetes.io/projected/91766feb-a370-4ea5-8bdb-a7197d87c4de-kube-api-access-kxr69\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.665060 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-log-httpd\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.665130 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.665172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.665528 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-run-httpd\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.665541 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-log-httpd\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.670991 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-scripts\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.671658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-config-data\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.673836 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.676298 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.687981 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxr69\" (UniqueName: \"kubernetes.io/projected/91766feb-a370-4ea5-8bdb-a7197d87c4de-kube-api-access-kxr69\") pod \"ceilometer-0\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") " pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.692755 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:04 crc kubenswrapper[4675]: I1121 14:01:04.863313 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf" path="/var/lib/kubelet/pods/6ca04b3a-e004-41ba-a3d7-e0c34ee9adbf/volumes" Nov 21 14:01:05 crc kubenswrapper[4675]: I1121 14:01:05.200863 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:05 crc kubenswrapper[4675]: I1121 14:01:05.214464 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a16764c-944a-48be-ba08-7b46b89ffdba" containerID="776b4793f067dde84d561d292f98e16e4a97997b33caa6685cb52fafd0ddf08f" exitCode=0 Nov 21 14:01:05 crc kubenswrapper[4675]: I1121 14:01:05.214543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-scxxf" event={"ID":"6a16764c-944a-48be-ba08-7b46b89ffdba","Type":"ContainerDied","Data":"776b4793f067dde84d561d292f98e16e4a97997b33caa6685cb52fafd0ddf08f"} Nov 21 14:01:05 crc kubenswrapper[4675]: I1121 14:01:05.222873 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerStarted","Data":"028d01d977c4487349a1fe23033bffa8e6b9108fb87932c83a732eb98eff7e51"} Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.771185 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.840443 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-combined-ca-bundle\") pod \"6a16764c-944a-48be-ba08-7b46b89ffdba\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.840579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-config-data\") pod \"6a16764c-944a-48be-ba08-7b46b89ffdba\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.840674 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs6dq\" (UniqueName: \"kubernetes.io/projected/6a16764c-944a-48be-ba08-7b46b89ffdba-kube-api-access-hs6dq\") pod \"6a16764c-944a-48be-ba08-7b46b89ffdba\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.840695 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-fernet-keys\") pod \"6a16764c-944a-48be-ba08-7b46b89ffdba\" (UID: \"6a16764c-944a-48be-ba08-7b46b89ffdba\") " Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.846194 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a16764c-944a-48be-ba08-7b46b89ffdba-kube-api-access-hs6dq" (OuterVolumeSpecName: "kube-api-access-hs6dq") pod "6a16764c-944a-48be-ba08-7b46b89ffdba" (UID: "6a16764c-944a-48be-ba08-7b46b89ffdba"). InnerVolumeSpecName "kube-api-access-hs6dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.846820 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6a16764c-944a-48be-ba08-7b46b89ffdba" (UID: "6a16764c-944a-48be-ba08-7b46b89ffdba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.904893 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a16764c-944a-48be-ba08-7b46b89ffdba" (UID: "6a16764c-944a-48be-ba08-7b46b89ffdba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.920587 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-config-data" (OuterVolumeSpecName: "config-data") pod "6a16764c-944a-48be-ba08-7b46b89ffdba" (UID: "6a16764c-944a-48be-ba08-7b46b89ffdba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.943815 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.943854 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.943867 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs6dq\" (UniqueName: \"kubernetes.io/projected/6a16764c-944a-48be-ba08-7b46b89ffdba-kube-api-access-hs6dq\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:06 crc kubenswrapper[4675]: I1121 14:01:06.943880 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a16764c-944a-48be-ba08-7b46b89ffdba-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:07 crc kubenswrapper[4675]: I1121 14:01:07.261895 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerStarted","Data":"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"} Nov 21 14:01:07 crc kubenswrapper[4675]: I1121 14:01:07.266627 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-scxxf" event={"ID":"6a16764c-944a-48be-ba08-7b46b89ffdba","Type":"ContainerDied","Data":"d08ef39f63c68ffc0f8717f5a5a738f57261feac19605359786e5af5905ae3db"} Nov 21 14:01:07 crc kubenswrapper[4675]: I1121 14:01:07.266668 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08ef39f63c68ffc0f8717f5a5a738f57261feac19605359786e5af5905ae3db" Nov 21 14:01:07 crc kubenswrapper[4675]: I1121 14:01:07.266723 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395561-scxxf" Nov 21 14:01:08 crc kubenswrapper[4675]: I1121 14:01:08.070214 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 14:01:08 crc kubenswrapper[4675]: I1121 14:01:08.071569 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 14:01:08 crc kubenswrapper[4675]: I1121 14:01:08.088132 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 14:01:08 crc kubenswrapper[4675]: I1121 14:01:08.283161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerStarted","Data":"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"} Nov 21 14:01:08 crc kubenswrapper[4675]: I1121 14:01:08.288470 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 14:01:08 crc kubenswrapper[4675]: I1121 14:01:08.915834 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.004818 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-combined-ca-bundle\") pod \"61dba3cf-1cb1-4641-9435-2eac045c894e\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.004952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-config-data\") pod \"61dba3cf-1cb1-4641-9435-2eac045c894e\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.005024 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljl6r\" (UniqueName: \"kubernetes.io/projected/61dba3cf-1cb1-4641-9435-2eac045c894e-kube-api-access-ljl6r\") pod \"61dba3cf-1cb1-4641-9435-2eac045c894e\" (UID: \"61dba3cf-1cb1-4641-9435-2eac045c894e\") " Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.019536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61dba3cf-1cb1-4641-9435-2eac045c894e-kube-api-access-ljl6r" (OuterVolumeSpecName: "kube-api-access-ljl6r") pod "61dba3cf-1cb1-4641-9435-2eac045c894e" (UID: "61dba3cf-1cb1-4641-9435-2eac045c894e"). InnerVolumeSpecName "kube-api-access-ljl6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.055630 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61dba3cf-1cb1-4641-9435-2eac045c894e" (UID: "61dba3cf-1cb1-4641-9435-2eac045c894e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.060363 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-config-data" (OuterVolumeSpecName: "config-data") pod "61dba3cf-1cb1-4641-9435-2eac045c894e" (UID: "61dba3cf-1cb1-4641-9435-2eac045c894e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.107343 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.107385 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61dba3cf-1cb1-4641-9435-2eac045c894e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.107396 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljl6r\" (UniqueName: \"kubernetes.io/projected/61dba3cf-1cb1-4641-9435-2eac045c894e-kube-api-access-ljl6r\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.297236 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerStarted","Data":"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"} Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.299891 4675 generic.go:334] "Generic (PLEG): container finished" podID="61dba3cf-1cb1-4641-9435-2eac045c894e" containerID="83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d" exitCode=137 Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.300025 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.300011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61dba3cf-1cb1-4641-9435-2eac045c894e","Type":"ContainerDied","Data":"83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d"} Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.300118 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"61dba3cf-1cb1-4641-9435-2eac045c894e","Type":"ContainerDied","Data":"0f662acf783405694f6395544156d0a16194fafe0d4fe17cd75ff056b6a3eff2"} Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.300143 4675 scope.go:117] "RemoveContainer" containerID="83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.333175 4675 scope.go:117] "RemoveContainer" containerID="83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d" Nov 21 14:01:09 crc kubenswrapper[4675]: E1121 14:01:09.333862 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d\": container with ID starting with 83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d not found: ID does not exist" containerID="83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.333919 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d"} err="failed to get container status \"83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d\": rpc error: code = NotFound desc = could not find container \"83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d\": container with ID starting with 
83d94673bb9d999d7906139b67e9aa04492987f468bc2d98c03df28cdfffdc5d not found: ID does not exist" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.352996 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.364647 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.377900 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 14:01:09 crc kubenswrapper[4675]: E1121 14:01:09.378742 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a16764c-944a-48be-ba08-7b46b89ffdba" containerName="keystone-cron" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.378846 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a16764c-944a-48be-ba08-7b46b89ffdba" containerName="keystone-cron" Nov 21 14:01:09 crc kubenswrapper[4675]: E1121 14:01:09.378937 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61dba3cf-1cb1-4641-9435-2eac045c894e" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.378999 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="61dba3cf-1cb1-4641-9435-2eac045c894e" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.379344 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="61dba3cf-1cb1-4641-9435-2eac045c894e" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.379421 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a16764c-944a-48be-ba08-7b46b89ffdba" containerName="keystone-cron" Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.380365 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.384539 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.384771 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.384920 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.387149 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.417473 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.417643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4ww\" (UniqueName: \"kubernetes.io/projected/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-kube-api-access-cd4ww\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.417905 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.417963 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.418033 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.529206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.529314 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.529438 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.529554 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.529598 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd4ww\" (UniqueName: \"kubernetes.io/projected/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-kube-api-access-cd4ww\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.536024 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.536115 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.536585 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.543845 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.548178 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd4ww\" (UniqueName: \"kubernetes.io/projected/25ba2d9f-d85c-403d-b7e6-8b17f48e4316-kube-api-access-cd4ww\") pod \"nova-cell1-novncproxy-0\" (UID: \"25ba2d9f-d85c-403d-b7e6-8b17f48e4316\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:09 crc kubenswrapper[4675]: I1121 14:01:09.716431 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:10 crc kubenswrapper[4675]: I1121 14:01:10.313320 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerStarted","Data":"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"}
Nov 21 14:01:10 crc kubenswrapper[4675]: I1121 14:01:10.314266 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 21 14:01:10 crc kubenswrapper[4675]: I1121 14:01:10.319079 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 21 14:01:10 crc kubenswrapper[4675]: I1121 14:01:10.862278 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61dba3cf-1cb1-4641-9435-2eac045c894e" path="/var/lib/kubelet/pods/61dba3cf-1cb1-4641-9435-2eac045c894e/volumes"
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.329122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"25ba2d9f-d85c-403d-b7e6-8b17f48e4316","Type":"ContainerStarted","Data":"870f9df13b5966ee49e184c090553619262e8a12242c9f2ea66bd3a315046640"}
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.329162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"25ba2d9f-d85c-403d-b7e6-8b17f48e4316","Type":"ContainerStarted","Data":"ede9a83d145b6b5dce795b0f2b771ebe997cd1c40a454f3ff0344611e6a4334c"}
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.350986 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.910973663 podStartE2EDuration="7.350966944s" podCreationTimestamp="2025-11-21 14:01:04 +0000 UTC" firstStartedPulling="2025-11-21 14:01:05.206190357 +0000 UTC m=+1741.932605084" lastFinishedPulling="2025-11-21 14:01:09.646183638 +0000 UTC m=+1746.372598365" observedRunningTime="2025-11-21 14:01:10.353484151 +0000 UTC m=+1747.079898878" watchObservedRunningTime="2025-11-21 14:01:11.350966944 +0000 UTC m=+1748.077381681"
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.358024 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.357986909 podStartE2EDuration="2.357986909s" podCreationTimestamp="2025-11-21 14:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:11.347339623 +0000 UTC m=+1748.073754380" watchObservedRunningTime="2025-11-21 14:01:11.357986909 +0000 UTC m=+1748.084401626"
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.392897 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.393881 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.393980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 21 14:01:11 crc kubenswrapper[4675]: I1121 14:01:11.396962 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.341173 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.344090 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.598595 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-p5kck"]
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.609182 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.622873 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-p5kck"]
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.700161 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.700229 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqj7z\" (UniqueName: \"kubernetes.io/projected/e3d2866a-84e9-4475-bab0-69d4aaa9656f-kube-api-access-dqj7z\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.700258 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.700299 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-config\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.700363 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.700405 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.812590 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.812666 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqj7z\" (UniqueName: \"kubernetes.io/projected/e3d2866a-84e9-4475-bab0-69d4aaa9656f-kube-api-access-dqj7z\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.812691 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.812734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-config\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.812803 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.812848 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.813888 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.814456 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.815047 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.815477 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-config\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.815984 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.902625 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqj7z\" (UniqueName: \"kubernetes.io/projected/e3d2866a-84e9-4475-bab0-69d4aaa9656f-kube-api-access-dqj7z\") pod \"dnsmasq-dns-f84f9ccf-p5kck\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:12 crc kubenswrapper[4675]: I1121 14:01:12.955671 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:13 crc kubenswrapper[4675]: I1121 14:01:13.542453 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-p5kck"]
Nov 21 14:01:14 crc kubenswrapper[4675]: I1121 14:01:14.373857 4675 generic.go:334] "Generic (PLEG): container finished" podID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerID="e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9" exitCode=0
Nov 21 14:01:14 crc kubenswrapper[4675]: I1121 14:01:14.374001 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" event={"ID":"e3d2866a-84e9-4475-bab0-69d4aaa9656f","Type":"ContainerDied","Data":"e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9"}
Nov 21 14:01:14 crc kubenswrapper[4675]: I1121 14:01:14.374499 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" event={"ID":"e3d2866a-84e9-4475-bab0-69d4aaa9656f","Type":"ContainerStarted","Data":"2e8bc88c31daf01827d3d209a853d1137b9874fde4ae8d9bdc76f2d38aaadc46"}
Nov 21 14:01:14 crc kubenswrapper[4675]: I1121 14:01:14.717447 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.233855 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.350043 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.350490 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="sg-core" containerID="cri-o://0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d" gracePeriod=30
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.350513 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="proxy-httpd" containerID="cri-o://6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f" gracePeriod=30
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.350661 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-notification-agent" containerID="cri-o://f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5" gracePeriod=30
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.350453 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-central-agent" containerID="cri-o://5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0" gracePeriod=30
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.393608 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" event={"ID":"e3d2866a-84e9-4475-bab0-69d4aaa9656f","Type":"ContainerStarted","Data":"b0a25bfd0b119bacf9da85cb10d40a75cf98db009ab8331e2af6dfe7e4a0fd38"}
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.393671 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-log" containerID="cri-o://ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34" gracePeriod=30
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.393796 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-api" containerID="cri-o://a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9" gracePeriod=30
Nov 21 14:01:15 crc kubenswrapper[4675]: I1121 14:01:15.422182 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" podStartSLOduration=3.422157489 podStartE2EDuration="3.422157489s" podCreationTimestamp="2025-11-21 14:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:15.413975554 +0000 UTC m=+1752.140390271" watchObservedRunningTime="2025-11-21 14:01:15.422157489 +0000 UTC m=+1752.148572216"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.382538 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.420837 4675 generic.go:334] "Generic (PLEG): container finished" podID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f" exitCode=0
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.420873 4675 generic.go:334] "Generic (PLEG): container finished" podID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d" exitCode=2
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.420883 4675 generic.go:334] "Generic (PLEG): container finished" podID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5" exitCode=0
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.420891 4675 generic.go:334] "Generic (PLEG): container finished" podID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0" exitCode=0
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.420943 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerDied","Data":"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"}
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.420977 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerDied","Data":"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"}
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.421000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerDied","Data":"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"}
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.421012 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerDied","Data":"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"}
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.421023 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91766feb-a370-4ea5-8bdb-a7197d87c4de","Type":"ContainerDied","Data":"028d01d977c4487349a1fe23033bffa8e6b9108fb87932c83a732eb98eff7e51"}
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.421041 4675 scope.go:117] "RemoveContainer" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.421090 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.430733 4675 generic.go:334] "Generic (PLEG): container finished" podID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerID="ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34" exitCode=143
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.431978 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14","Type":"ContainerDied","Data":"ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34"}
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.432022 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.450745 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-log-httpd\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.450876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-combined-ca-bundle\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.450959 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-sg-core-conf-yaml\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.451007 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-run-httpd\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.451079 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-config-data\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.451149 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-scripts\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.451178 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxr69\" (UniqueName: \"kubernetes.io/projected/91766feb-a370-4ea5-8bdb-a7197d87c4de-kube-api-access-kxr69\") pod \"91766feb-a370-4ea5-8bdb-a7197d87c4de\" (UID: \"91766feb-a370-4ea5-8bdb-a7197d87c4de\") "
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.452976 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.453683 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.460104 4675 scope.go:117] "RemoveContainer" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.468960 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-scripts" (OuterVolumeSpecName: "scripts") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.471321 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91766feb-a370-4ea5-8bdb-a7197d87c4de-kube-api-access-kxr69" (OuterVolumeSpecName: "kube-api-access-kxr69") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "kube-api-access-kxr69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.515638 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.553595 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.553623 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.553633 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.553643 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxr69\" (UniqueName: \"kubernetes.io/projected/91766feb-a370-4ea5-8bdb-a7197d87c4de-kube-api-access-kxr69\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.553659 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91766feb-a370-4ea5-8bdb-a7197d87c4de-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.574270 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.620135 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-config-data" (OuterVolumeSpecName: "config-data") pod "91766feb-a370-4ea5-8bdb-a7197d87c4de" (UID: "91766feb-a370-4ea5-8bdb-a7197d87c4de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.657338 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.657379 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91766feb-a370-4ea5-8bdb-a7197d87c4de-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.727220 4675 scope.go:117] "RemoveContainer" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.760762 4675 scope.go:117] "RemoveContainer" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.761567 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.775107 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.789814 4675 scope.go:117] "RemoveContainer" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.790581 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": container with ID starting with 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f not found: ID does not exist" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.790627 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"} err="failed to get container status \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": rpc error: code = NotFound desc = could not find container \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": container with ID starting with 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.790654 4675 scope.go:117] "RemoveContainer" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.790955 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": container with ID starting with 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d not found: ID does not exist" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.790986 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"} err="failed to get container status \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": rpc error: code = NotFound desc = could not find container \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": container with ID starting with 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791007 4675 scope.go:117] "RemoveContainer" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.791293 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": container with ID starting with f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5 not found: ID does not exist" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791321 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"} err="failed to get container status \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": rpc error: code = NotFound desc = could not find container \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": container with ID starting with f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791337 4675 scope.go:117] "RemoveContainer" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.791601 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": container with ID starting with 5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0 not found: ID does not exist" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791625 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"} err="failed to get container status \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": rpc error: code = NotFound desc = could not find container \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": container with ID starting with 5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791641 4675 scope.go:117] "RemoveContainer" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791871 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"} err="failed to get container status \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": rpc error: code = NotFound desc = could not find container \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": container with ID starting with 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.791894 4675 scope.go:117] "RemoveContainer" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792239 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"} err="failed to get container status \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": rpc error: code = NotFound desc = could not find container \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": container with ID starting with 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792262 4675 scope.go:117] "RemoveContainer" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792518 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"} err="failed to get container status \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": rpc error: code = NotFound desc = could not find container \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": container with ID starting with f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792542 4675 scope.go:117] "RemoveContainer" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792744 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"} err="failed to get container status \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": rpc error: code = NotFound desc = could not find container \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": container with ID starting with 5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792767 4675 scope.go:117] "RemoveContainer" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792970 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"} err="failed to get container status \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": rpc error: code = NotFound desc = could not find container \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": container with ID starting with 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.792991 4675 scope.go:117] "RemoveContainer" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.795732 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.795945 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"} err="failed to get container status \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": rpc error: code = NotFound desc = could not find container \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": container with ID starting with 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796125 4675 scope.go:117] "RemoveContainer" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.796459 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-notification-agent"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796482 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-notification-agent"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.796528 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-central-agent"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796537 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-central-agent"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.796573 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="sg-core"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796582 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="sg-core"
Nov 21 14:01:16 crc kubenswrapper[4675]: E1121 14:01:16.796597 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="proxy-httpd"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796605 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="proxy-httpd"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796759 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"} err="failed to get container status \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": rpc error: code = NotFound desc = could not find container \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": container with ID starting with f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796827 4675 scope.go:117] "RemoveContainer" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796877 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="sg-core"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796900 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="proxy-httpd"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796921 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-central-agent"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.796945 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" containerName="ceilometer-notification-agent"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.798334 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"} err="failed to get container status \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": rpc error: code = NotFound desc = could not find container \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": container with ID starting with 5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.798451 4675 scope.go:117] "RemoveContainer" containerID="6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.798792 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f"} err="failed to get container status \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": rpc error: code = NotFound desc = could not find container \"6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f\": container with ID starting with 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.798878 4675 scope.go:117] "RemoveContainer" containerID="0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.800132 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.802652 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d"} err="failed to get container status \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": rpc error: code = NotFound desc = could not find container \"0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d\": container with ID starting with 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.802762 4675 scope.go:117] "RemoveContainer" containerID="f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.803357 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5"} err="failed to get container status \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": rpc error: code = NotFound desc = could not find container \"f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5\": container with ID starting with f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.803420 4675 scope.go:117] "RemoveContainer" containerID="5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.803767 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0"} err="failed to get container status \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": rpc error: code = NotFound desc = could not find container \"5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0\": container with ID starting with 5a425bad2a5f80c8ead11f06ae3a7bf99e1d05fd212e0f7444ba76b55feddba0 not found: ID does not exist"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.805742 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.806020 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.811726 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862324 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-log-httpd\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862382 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-config-data\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862499 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-scripts\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862526 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-run-httpd\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862560 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbm72\" (UniqueName: \"kubernetes.io/projected/f05fdf4e-c229-40af-aafe-55dd5beb6cac-kube-api-access-sbm72\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.862643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.879057 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91766feb-a370-4ea5-8bdb-a7197d87c4de" path="/var/lib/kubelet/pods/91766feb-a370-4ea5-8bdb-a7197d87c4de/volumes"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.965667 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-config-data\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.965714 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-log-httpd\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.965819 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.965889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-scripts\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.966386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-run-httpd\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.966417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-log-httpd\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.966516 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbm72\" (UniqueName: \"kubernetes.io/projected/f05fdf4e-c229-40af-aafe-55dd5beb6cac-kube-api-access-sbm72\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.966749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.966780 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-run-httpd\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.970813 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.972055 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-config-data\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.972612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-scripts\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.973697 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:16 crc kubenswrapper[4675]: I1121 14:01:16.987150 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbm72\" (UniqueName: \"kubernetes.io/projected/f05fdf4e-c229-40af-aafe-55dd5beb6cac-kube-api-access-sbm72\") pod \"ceilometer-0\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " pod="openstack/ceilometer-0"
Nov 21 14:01:17 crc kubenswrapper[4675]: I1121 14:01:17.157873 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 14:01:17 crc kubenswrapper[4675]: I1121 14:01:17.386565 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:17 crc kubenswrapper[4675]: I1121 14:01:17.621009 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 14:01:18 crc kubenswrapper[4675]: I1121 14:01:18.456046 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerStarted","Data":"513fdb6b46f2d9df30ce94a8db3dcb87d07114073ddad670f8d80b40c9c07d5c"}
Nov 21 14:01:18 crc kubenswrapper[4675]: I1121 14:01:18.456447 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerStarted","Data":"7f7edcad65017f4a950619188e1d2014fdf6ff4bdc45ba713670578aacd1c646"}
Nov 21 14:01:18 crc kubenswrapper[4675]: I1121 14:01:18.854020 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4"
Nov 21 14:01:18 crc kubenswrapper[4675]: E1121 14:01:18.854616 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.112459 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.220349 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmxqm\" (UniqueName: \"kubernetes.io/projected/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-kube-api-access-kmxqm\") pod \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") "
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.220461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-config-data\") pod \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") "
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.220605 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-combined-ca-bundle\") pod \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") "
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.220854 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-logs\") pod \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\" (UID: \"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14\") "
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.222403 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-logs" (OuterVolumeSpecName: "logs") pod "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" (UID: "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.251758 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-kube-api-access-kmxqm" (OuterVolumeSpecName: "kube-api-access-kmxqm") pod "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" (UID: "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14"). InnerVolumeSpecName "kube-api-access-kmxqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.266157 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-config-data" (OuterVolumeSpecName: "config-data") pod "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" (UID: "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.304354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" (UID: "2f9ecd64-d77b-4ec0-92aa-e6531e6fde14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.328127 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-logs\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.328173 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmxqm\" (UniqueName: \"kubernetes.io/projected/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-kube-api-access-kmxqm\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.328188 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.328199 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.496098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerStarted","Data":"e54054341bcef89e1b6f59450cd3b498799ef59d19d01ea2dda351cc2a2ec858"}
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.525952 4675 generic.go:334] "Generic (PLEG): container finished" podID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerID="a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9" exitCode=0
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.526174 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14","Type":"ContainerDied","Data":"a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9"}
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.526266 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9ecd64-d77b-4ec0-92aa-e6531e6fde14","Type":"ContainerDied","Data":"f694966674fcb25b20fa614370f39fd7431eb45365992ebb1de6752c3755badc"}
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.526344 4675 scope.go:117] "RemoveContainer" containerID="a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.526539 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.563128 4675 scope.go:117] "RemoveContainer" containerID="ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.568721 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.585817 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.596702 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 21 14:01:19 crc kubenswrapper[4675]: E1121 14:01:19.597369 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-api"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.597540 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-api"
Nov 21 14:01:19 crc kubenswrapper[4675]: E1121 14:01:19.597628 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-log"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.597686 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-log"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.599554 4675 scope.go:117] "RemoveContainer" containerID="a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.600274 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-log"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.600356 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" containerName="nova-api-api"
Nov 21 14:01:19 crc kubenswrapper[4675]: E1121 14:01:19.601409 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9\": container with ID starting with a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9 not found: ID does not exist" containerID="a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.601528 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9"} err="failed to get container status \"a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9\": rpc error: code = NotFound desc = could not find container \"a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9\": container with ID starting with a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9 not found: ID does not exist"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.601565 4675 scope.go:117] "RemoveContainer" containerID="ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.604605 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: E1121 14:01:19.608372 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34\": container with ID starting with ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34 not found: ID does not exist" containerID="ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.608438 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34"} err="failed to get container status \"ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34\": rpc error: code = NotFound desc = could not find container \"ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34\": container with ID starting with ce10b5be29cae11a9a2f15b4d9fe7f79fa6e63cddbdcf2c1e475cd8eee140a34 not found: ID does not exist"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.608579 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.610500 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.610842 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.635986 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-logs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.636042 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-public-tls-certs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.636194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.636224 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-internal-tls-certs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0"
Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.636274 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-config-data\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.636303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9fc8\" (UniqueName: \"kubernetes.io/projected/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-kube-api-access-b9fc8\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.641338 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.720526 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.738579 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-logs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.738737 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-public-tls-certs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.738838 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.738908 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-internal-tls-certs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.739002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-config-data\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.739123 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9fc8\" (UniqueName: \"kubernetes.io/projected/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-kube-api-access-b9fc8\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.741556 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-logs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.751756 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-config-data\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.756718 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.764735 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-internal-tls-certs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.765190 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-public-tls-certs\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.778800 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9fc8\" (UniqueName: \"kubernetes.io/projected/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-kube-api-access-b9fc8\") pod \"nova-api-0\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") " pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.859455 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:01:19 crc kubenswrapper[4675]: I1121 14:01:19.954037 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.392055 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.545533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerStarted","Data":"56c128e74c725c7997ef213f1a20a3a3d1d56189608e4fdeb62f86b898de5815"} Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.548921 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19c809ef-4117-44b8-a7c0-8b5f0a60dd51","Type":"ContainerStarted","Data":"706c73891fdc147c59b9b2cd72b77a75381e8c70a8819fab6b024c5f66436298"} Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.574109 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.779133 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bxrvb"] Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.781564 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.787431 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.787707 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.813609 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxrvb"] Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.864917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-scripts\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.865011 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dnt\" (UniqueName: \"kubernetes.io/projected/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-kube-api-access-n8dnt\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.865331 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-config-data\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.865376 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.865543 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f9ecd64-d77b-4ec0-92aa-e6531e6fde14" path="/var/lib/kubelet/pods/2f9ecd64-d77b-4ec0-92aa-e6531e6fde14/volumes" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.967178 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-config-data\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.967224 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.967347 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-scripts\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: 
\"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.967449 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8dnt\" (UniqueName: \"kubernetes.io/projected/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-kube-api-access-n8dnt\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.973669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-scripts\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.981507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.981517 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-config-data\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:20 crc kubenswrapper[4675]: I1121 14:01:20.992273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8dnt\" (UniqueName: \"kubernetes.io/projected/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-kube-api-access-n8dnt\") pod \"nova-cell1-cell-mapping-bxrvb\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.117534 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.570082 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerStarted","Data":"347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448"} Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.570538 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.570233 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-central-agent" containerID="cri-o://513fdb6b46f2d9df30ce94a8db3dcb87d07114073ddad670f8d80b40c9c07d5c" gracePeriod=30 Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.570641 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="proxy-httpd" containerID="cri-o://347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448" gracePeriod=30 Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.570734 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-notification-agent" containerID="cri-o://e54054341bcef89e1b6f59450cd3b498799ef59d19d01ea2dda351cc2a2ec858" gracePeriod=30 Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.570828 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="sg-core" containerID="cri-o://56c128e74c725c7997ef213f1a20a3a3d1d56189608e4fdeb62f86b898de5815" gracePeriod=30 Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.582191 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19c809ef-4117-44b8-a7c0-8b5f0a60dd51","Type":"ContainerStarted","Data":"72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d"} Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.582230 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19c809ef-4117-44b8-a7c0-8b5f0a60dd51","Type":"ContainerStarted","Data":"666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d"} Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.597356 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.030775419 podStartE2EDuration="5.597340195s" podCreationTimestamp="2025-11-21 14:01:16 +0000 UTC" firstStartedPulling="2025-11-21 14:01:17.625648967 +0000 UTC m=+1754.352063694" lastFinishedPulling="2025-11-21 14:01:21.192213743 +0000 UTC m=+1757.918628470" observedRunningTime="2025-11-21 14:01:21.59232129 +0000 UTC m=+1758.318736007" watchObservedRunningTime="2025-11-21 14:01:21.597340195 +0000 UTC m=+1758.323754922" Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.634888 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.634862483 podStartE2EDuration="2.634862483s" podCreationTimestamp="2025-11-21 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 
14:01:21.620491883 +0000 UTC m=+1758.346906620" watchObservedRunningTime="2025-11-21 14:01:21.634862483 +0000 UTC m=+1758.361277210" Nov 21 14:01:21 crc kubenswrapper[4675]: I1121 14:01:21.708682 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxrvb"] Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.610343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxrvb" event={"ID":"018c03cb-3abb-4ad7-b496-fb3d440d3ec3","Type":"ContainerStarted","Data":"74da10aae6a67f6c1367017d6d2f1b44d9464b443b948804115e87fcfd349340"} Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.610753 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxrvb" event={"ID":"018c03cb-3abb-4ad7-b496-fb3d440d3ec3","Type":"ContainerStarted","Data":"d2bb1ccd26fbbc4cfa73b85b926966b92383d256017884633da563ab4561e182"} Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.613420 4675 generic.go:334] "Generic (PLEG): container finished" podID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerID="56c128e74c725c7997ef213f1a20a3a3d1d56189608e4fdeb62f86b898de5815" exitCode=2 Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.613448 4675 generic.go:334] "Generic (PLEG): container finished" podID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerID="e54054341bcef89e1b6f59450cd3b498799ef59d19d01ea2dda351cc2a2ec858" exitCode=0 Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.614424 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerDied","Data":"56c128e74c725c7997ef213f1a20a3a3d1d56189608e4fdeb62f86b898de5815"} Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.614456 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerDied","Data":"e54054341bcef89e1b6f59450cd3b498799ef59d19d01ea2dda351cc2a2ec858"} Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.635988 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bxrvb" podStartSLOduration=2.635963907 podStartE2EDuration="2.635963907s" podCreationTimestamp="2025-11-21 14:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:22.627523986 +0000 UTC m=+1759.353938713" watchObservedRunningTime="2025-11-21 14:01:22.635963907 +0000 UTC m=+1759.362378624" Nov 21 14:01:22 crc kubenswrapper[4675]: I1121 14:01:22.957418 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.033145 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-ksc5h"] Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.033594 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerName="dnsmasq-dns" containerID="cri-o://aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca" gracePeriod=10 Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.601214 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.643634 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-config\") pod \"8d812b38-ac4b-4262-8642-bfe5c2b19222\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.644635 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-svc\") pod \"8d812b38-ac4b-4262-8642-bfe5c2b19222\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.644774 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-sb\") pod \"8d812b38-ac4b-4262-8642-bfe5c2b19222\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.645084 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q4hv\" (UniqueName: \"kubernetes.io/projected/8d812b38-ac4b-4262-8642-bfe5c2b19222-kube-api-access-7q4hv\") pod \"8d812b38-ac4b-4262-8642-bfe5c2b19222\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.645236 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-swift-storage-0\") pod \"8d812b38-ac4b-4262-8642-bfe5c2b19222\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.645364 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-nb\") pod \"8d812b38-ac4b-4262-8642-bfe5c2b19222\" (UID: \"8d812b38-ac4b-4262-8642-bfe5c2b19222\") " Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.647741 4675 generic.go:334] "Generic (PLEG): container finished" podID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerID="aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca" exitCode=0 Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.648276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" event={"ID":"8d812b38-ac4b-4262-8642-bfe5c2b19222","Type":"ContainerDied","Data":"aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca"} Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.648364 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" event={"ID":"8d812b38-ac4b-4262-8642-bfe5c2b19222","Type":"ContainerDied","Data":"dfcff063b5976329ba022b96169f20d9de2659998b2804e669f7bb315134a2ba"} Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.648402 4675 scope.go:117] "RemoveContainer" containerID="aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.649416 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-ksc5h" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.665344 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d812b38-ac4b-4262-8642-bfe5c2b19222-kube-api-access-7q4hv" (OuterVolumeSpecName: "kube-api-access-7q4hv") pod "8d812b38-ac4b-4262-8642-bfe5c2b19222" (UID: "8d812b38-ac4b-4262-8642-bfe5c2b19222"). InnerVolumeSpecName "kube-api-access-7q4hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.754085 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q4hv\" (UniqueName: \"kubernetes.io/projected/8d812b38-ac4b-4262-8642-bfe5c2b19222-kube-api-access-7q4hv\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.757844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8d812b38-ac4b-4262-8642-bfe5c2b19222" (UID: "8d812b38-ac4b-4262-8642-bfe5c2b19222"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.770479 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d812b38-ac4b-4262-8642-bfe5c2b19222" (UID: "8d812b38-ac4b-4262-8642-bfe5c2b19222"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.800876 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d812b38-ac4b-4262-8642-bfe5c2b19222" (UID: "8d812b38-ac4b-4262-8642-bfe5c2b19222"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.808305 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-config" (OuterVolumeSpecName: "config") pod "8d812b38-ac4b-4262-8642-bfe5c2b19222" (UID: "8d812b38-ac4b-4262-8642-bfe5c2b19222"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.815039 4675 scope.go:117] "RemoveContainer" containerID="25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.835268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d812b38-ac4b-4262-8642-bfe5c2b19222" (UID: "8d812b38-ac4b-4262-8642-bfe5c2b19222"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.840303 4675 scope.go:117] "RemoveContainer" containerID="aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca" Nov 21 14:01:23 crc kubenswrapper[4675]: E1121 14:01:23.844400 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca\": container with ID starting with aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca not found: ID does not exist" containerID="aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.844459 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca"} err="failed to get container status \"aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca\": rpc error: code = NotFound desc = could not find container \"aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca\": container with ID starting with aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca not found: ID does not exist" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.844493 4675 scope.go:117] "RemoveContainer" containerID="25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578" Nov 21 14:01:23 crc kubenswrapper[4675]: E1121 14:01:23.844983 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578\": container with ID starting with 25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578 not found: ID does not exist" containerID="25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.845012 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578"} err="failed to get container status \"25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578\": rpc error: code = NotFound desc = could not find container \"25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578\": container with ID starting with 25af74ea6c8be371ccb804839bdc7de25eabc72168a049607ced6b4f9806c578 not found: ID does not exist" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.860825 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-config\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.860865 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.860877 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.860890 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-dns-swift-storage-0\") on 
node \"crc\" DevicePath \"\"" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.860905 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d812b38-ac4b-4262-8642-bfe5c2b19222-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:23 crc kubenswrapper[4675]: I1121 14:01:23.990318 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-ksc5h"] Nov 21 14:01:24 crc kubenswrapper[4675]: I1121 14:01:24.001782 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-ksc5h"] Nov 21 14:01:24 crc kubenswrapper[4675]: I1121 14:01:24.862829 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" path="/var/lib/kubelet/pods/8d812b38-ac4b-4262-8642-bfe5c2b19222/volumes" Nov 21 14:01:25 crc kubenswrapper[4675]: I1121 14:01:25.126650 4675 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podf0093acc-562a-48c9-b1d1-bde5cdb129be"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podf0093acc-562a-48c9-b1d1-bde5cdb129be] : Timed out while waiting for systemd to remove kubepods-burstable-podf0093acc_562a_48c9_b1d1_bde5cdb129be.slice" Nov 21 14:01:25 crc kubenswrapper[4675]: I1121 14:01:25.674346 4675 generic.go:334] "Generic (PLEG): container finished" podID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerID="513fdb6b46f2d9df30ce94a8db3dcb87d07114073ddad670f8d80b40c9c07d5c" exitCode=0 Nov 21 14:01:25 crc kubenswrapper[4675]: I1121 14:01:25.674401 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerDied","Data":"513fdb6b46f2d9df30ce94a8db3dcb87d07114073ddad670f8d80b40c9c07d5c"} Nov 21 14:01:27 crc kubenswrapper[4675]: I1121 14:01:27.702606 4675 generic.go:334] "Generic (PLEG): container finished" podID="018c03cb-3abb-4ad7-b496-fb3d440d3ec3" containerID="74da10aae6a67f6c1367017d6d2f1b44d9464b443b948804115e87fcfd349340" exitCode=0 Nov 21 14:01:27 crc kubenswrapper[4675]: I1121 14:01:27.702715 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxrvb" event={"ID":"018c03cb-3abb-4ad7-b496-fb3d440d3ec3","Type":"ContainerDied","Data":"74da10aae6a67f6c1367017d6d2f1b44d9464b443b948804115e87fcfd349340"} Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.122665 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.195353 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-config-data\") pod \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.195517 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-scripts\") pod \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.195553 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8dnt\" (UniqueName: \"kubernetes.io/projected/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-kube-api-access-n8dnt\") pod \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.195606 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-combined-ca-bundle\") pod \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\" (UID: \"018c03cb-3abb-4ad7-b496-fb3d440d3ec3\") " Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.201970 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-scripts" (OuterVolumeSpecName: "scripts") pod "018c03cb-3abb-4ad7-b496-fb3d440d3ec3" (UID: "018c03cb-3abb-4ad7-b496-fb3d440d3ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.202773 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-kube-api-access-n8dnt" (OuterVolumeSpecName: "kube-api-access-n8dnt") pod "018c03cb-3abb-4ad7-b496-fb3d440d3ec3" (UID: "018c03cb-3abb-4ad7-b496-fb3d440d3ec3"). InnerVolumeSpecName "kube-api-access-n8dnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.228692 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-config-data" (OuterVolumeSpecName: "config-data") pod "018c03cb-3abb-4ad7-b496-fb3d440d3ec3" (UID: "018c03cb-3abb-4ad7-b496-fb3d440d3ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.228991 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "018c03cb-3abb-4ad7-b496-fb3d440d3ec3" (UID: "018c03cb-3abb-4ad7-b496-fb3d440d3ec3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.298506 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.298541 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.298550 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8dnt\" (UniqueName: \"kubernetes.io/projected/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-kube-api-access-n8dnt\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.298560 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018c03cb-3abb-4ad7-b496-fb3d440d3ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.723206 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxrvb" event={"ID":"018c03cb-3abb-4ad7-b496-fb3d440d3ec3","Type":"ContainerDied","Data":"d2bb1ccd26fbbc4cfa73b85b926966b92383d256017884633da563ab4561e182"} Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.723252 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2bb1ccd26fbbc4cfa73b85b926966b92383d256017884633da563ab4561e182" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.723262 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxrvb" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.860654 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.860718 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.902312 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.920308 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.920839 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" containerName="nova-scheduler-scheduler" containerID="cri-o://956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c" gracePeriod=30 Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.939623 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.939901 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-log" containerID="cri-o://21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3" gracePeriod=30 Nov 21 14:01:29 crc kubenswrapper[4675]: I1121 14:01:29.939957 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" 
containerName="nova-metadata-metadata" containerID="cri-o://34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139" gracePeriod=30 Nov 21 14:01:30 crc kubenswrapper[4675]: I1121 14:01:30.735734 4675 generic.go:334] "Generic (PLEG): container finished" podID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerID="21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3" exitCode=143 Nov 21 14:01:30 crc kubenswrapper[4675]: I1121 14:01:30.735816 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c6e10d-66b8-4566-80d2-ed0ce8b08e64","Type":"ContainerDied","Data":"21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3"} Nov 21 14:01:30 crc kubenswrapper[4675]: I1121 14:01:30.736244 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-log" containerID="cri-o://666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d" gracePeriod=30 Nov 21 14:01:30 crc kubenswrapper[4675]: I1121 14:01:30.736594 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-api" containerID="cri-o://72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d" gracePeriod=30 Nov 21 14:01:30 crc kubenswrapper[4675]: I1121 14:01:30.740561 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": EOF" Nov 21 14:01:30 crc kubenswrapper[4675]: I1121 14:01:30.743388 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": EOF" Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.122225 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.123373 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.124469 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.124544 4675 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" containerName="nova-scheduler-scheduler" Nov 21 14:01:31 crc 
kubenswrapper[4675]: W1121 14:01:31.246980 4675 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d2866a_84e9_4475_bab0_69d4aaa9656f.slice/crio-conmon-e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d2866a_84e9_4475_bab0_69d4aaa9656f.slice/crio-conmon-e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9.scope: no such file or directory Nov 21 14:01:31 crc kubenswrapper[4675]: W1121 14:01:31.247081 4675 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d2866a_84e9_4475_bab0_69d4aaa9656f.slice/crio-e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d2866a_84e9_4475_bab0_69d4aaa9656f.slice/crio-e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9.scope: no such file or directory Nov 21 14:01:31 crc kubenswrapper[4675]: W1121 14:01:31.249329 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5.scope WatchSource:0}: Error finding container f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5: Status 404 returned error can't find the container with id f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5 Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.359186 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-conmon-0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-conmon-f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.359314 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-conmon-f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.359359 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-conmon-f7901d53f2579fd4419be936b1468cfba3f631237e31c88a51b10565eff9b9e5.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.738353 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.751676 4675 generic.go:334] "Generic (PLEG): container finished" podID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerID="666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d" exitCode=143 Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.751698 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-config-data\") pod \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.752012 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-scripts\") pod \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.752122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-combined-ca-bundle\") pod \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.752408 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptlxg\" (UniqueName: \"kubernetes.io/projected/fb1a7dc1-fee4-4671-9117-d653c3873ea8-kube-api-access-ptlxg\") pod \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\" (UID: \"fb1a7dc1-fee4-4671-9117-d653c3873ea8\") " Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.759954 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19c809ef-4117-44b8-a7c0-8b5f0a60dd51","Type":"ContainerDied","Data":"666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d"} Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.764673 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-scripts" (OuterVolumeSpecName: "scripts") pod "fb1a7dc1-fee4-4671-9117-d653c3873ea8" (UID: "fb1a7dc1-fee4-4671-9117-d653c3873ea8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765306 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerID="4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff" exitCode=137 Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765335 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerID="785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565" exitCode=137 Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerDied","Data":"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff"} Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765618 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerDied","Data":"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565"} Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765639 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fb1a7dc1-fee4-4671-9117-d653c3873ea8","Type":"ContainerDied","Data":"aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9"} Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765655 4675 scope.go:117] "RemoveContainer" containerID="4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765752 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1a7dc1-fee4-4671-9117-d653c3873ea8-kube-api-access-ptlxg" (OuterVolumeSpecName: "kube-api-access-ptlxg") pod "fb1a7dc1-fee4-4671-9117-d653c3873ea8" (UID: "fb1a7dc1-fee4-4671-9117-d653c3873ea8"). InnerVolumeSpecName "kube-api-access-ptlxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.765866 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.852761 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:01:31 crc kubenswrapper[4675]: E1121 14:01:31.853615 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.865273 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.865309 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptlxg\" (UniqueName: \"kubernetes.io/projected/fb1a7dc1-fee4-4671-9117-d653c3873ea8-kube-api-access-ptlxg\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.927255 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-config-data" (OuterVolumeSpecName: "config-data") pod "fb1a7dc1-fee4-4671-9117-d653c3873ea8" (UID: "fb1a7dc1-fee4-4671-9117-d653c3873ea8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.932213 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb1a7dc1-fee4-4671-9117-d653c3873ea8" (UID: "fb1a7dc1-fee4-4671-9117-d653c3873ea8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.937219 4675 scope.go:117] "RemoveContainer" containerID="785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.961108 4675 scope.go:117] "RemoveContainer" containerID="04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.970703 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.970730 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1a7dc1-fee4-4671-9117-d653c3873ea8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:31 crc kubenswrapper[4675]: I1121 14:01:31.985542 4675 scope.go:117] "RemoveContainer" containerID="936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.014090 4675 scope.go:117] "RemoveContainer" containerID="4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.014778 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff\": container with ID starting with 4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff not found: ID does not exist" containerID="4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.014874 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff"} err="failed to get container status \"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff\": rpc error: code = NotFound desc = could not find container \"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff\": container with ID starting with 4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.014959 4675 scope.go:117] "RemoveContainer" containerID="785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.017305 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565\": container with ID starting with 785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565 not found: ID does not exist" containerID="785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.017365 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565"} err="failed to get container status \"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565\": rpc error: code = NotFound desc = could not find container \"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565\": container with ID starting with 
785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565 not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.017404 4675 scope.go:117] "RemoveContainer" containerID="04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.017936 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068\": container with ID starting with 04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068 not found: ID does not exist" containerID="04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.017977 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068"} err="failed to get container status \"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068\": rpc error: code = NotFound desc = could not find container \"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068\": container with ID starting with 04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068 not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.017999 4675 scope.go:117] "RemoveContainer" containerID="936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.018665 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e\": container with ID starting with 936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e not found: ID does not exist" containerID="936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.018696 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e"} err="failed to get container status \"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e\": rpc error: code = NotFound desc = could not find container \"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e\": container with ID starting with 936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.018717 4675 scope.go:117] "RemoveContainer" containerID="4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.019463 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff"} err="failed to get container status \"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff\": rpc error: code = NotFound desc = could not find container \"4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff\": container with ID starting with 4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.019493 4675 scope.go:117] "RemoveContainer" containerID="785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565" Nov 21 14:01:32 crc 
kubenswrapper[4675]: I1121 14:01:32.020344 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565"} err="failed to get container status \"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565\": rpc error: code = NotFound desc = could not find container \"785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565\": container with ID starting with 785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565 not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.020376 4675 scope.go:117] "RemoveContainer" containerID="04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.022240 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068"} err="failed to get container status \"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068\": rpc error: code = NotFound desc = could not find container \"04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068\": container with ID starting with 04fe7463498cf3cc19f2bacc7d2246b2816a56838e18616ae2db359ca927a068 not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.022276 4675 scope.go:117] "RemoveContainer" containerID="936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.022568 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e"} err="failed to get container status \"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e\": rpc error: code = NotFound desc = could not find container \"936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e\": container with ID starting with 936ff1fe6478e5322e9fc26361fb90ac7d0f97e0714eb284b1363ce8c86ca87e not found: ID does not exist" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.100230 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.111182 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.155165 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156452 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-evaluator" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.156482 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-evaluator" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156542 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-api" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.156553 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-api" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156583 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-notifier" Nov 21 14:01:32 crc 
kubenswrapper[4675]: I1121 14:01:32.156592 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-notifier" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156620 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerName="dnsmasq-dns" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.156628 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerName="dnsmasq-dns" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156663 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018c03cb-3abb-4ad7-b496-fb3d440d3ec3" containerName="nova-manage" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.156672 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="018c03cb-3abb-4ad7-b496-fb3d440d3ec3" containerName="nova-manage" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156695 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerName="init" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.156705 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerName="init" Nov 21 14:01:32 crc kubenswrapper[4675]: E1121 14:01:32.156730 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-listener" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.156738 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-listener" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.157428 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-listener" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.157461 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d812b38-ac4b-4262-8642-bfe5c2b19222" containerName="dnsmasq-dns" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.157501 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="018c03cb-3abb-4ad7-b496-fb3d440d3ec3" containerName="nova-manage" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.157511 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-api" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.157539 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-notifier" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.157563 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" containerName="aodh-evaluator" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.175349 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.180664 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.182013 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.182162 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.183060 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.183233 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-swnjz" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.183371 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.278109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-internal-tls-certs\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.278422 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-config-data\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.278534 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6r8k\" (UniqueName: \"kubernetes.io/projected/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-kube-api-access-t6r8k\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.278773 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-public-tls-certs\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.279011 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-scripts\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.279176 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.381385 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-internal-tls-certs\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0" 
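[Note] The PLEG "container finished" events above report exitCode=143 for the nova-api-0 container and exitCode=137 for the aodh-0 containers. By the common 128+signal convention for container runtimes, those correspond to SIGTERM (128+15) and SIGKILL (128+9): the usual signature of a pod that was asked to stop and then hard-killed when its grace period expired. A minimal Python sketch of that decoding (the helper name and sample values are illustrative, not taken from the log):

    import signal

    def describe_exit_code(code: int) -> str:
        """Decode a container exit code per the 128+N signal convention."""
        if code == 0:
            return "clean exit"
        if code > 128:
            try:
                return f"killed by {signal.Signals(code - 128).name}"
            except ValueError:
                return f"killed by signal {code - 128}"
        return f"application error {code}"

    # Exit codes observed in the PLEG events above:
    for code in (143, 137, 0):
        print(code, "->", describe_exit_code(code))
    # 143 -> killed by SIGTERM; 137 -> killed by SIGKILL; 0 -> clean exit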
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.382218 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-config-data\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.382253 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6r8k\" (UniqueName: \"kubernetes.io/projected/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-kube-api-access-t6r8k\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.384243 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-public-tls-certs\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.384443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-scripts\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.384844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.386793 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-config-data\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.387456 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-internal-tls-certs\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.388600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.389621 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-public-tls-certs\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.391506 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-scripts\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.397483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6r8k\" (UniqueName: \"kubernetes.io/projected/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-kube-api-access-t6r8k\") pod \"aodh-0\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.512131 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 21 14:01:32 crc kubenswrapper[4675]: W1121 14:01:32.530467 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d.scope WatchSource:0}: Error finding container 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d: Status 404 returned error can't find the container with id 0667e36a7882160c83915c1233c6730dc2c19accb33505738e5958aea99a0a4d
Nov 21 14:01:32 crc kubenswrapper[4675]: W1121 14:01:32.532983 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91766feb_a370_4ea5_8bdb_a7197d87c4de.slice/crio-6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f.scope WatchSource:0}: Error finding container 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f: Status 404 returned error can't find the container with id 6033c577a97d6661f5013083177186577fe60c815ed9e34ab3f81896eecdbe6f
Nov 21 14:01:32 crc kubenswrapper[4675]: W1121 14:01:32.547354 4675 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018c03cb_3abb_4ad7_b496_fb3d440d3ec3.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018c03cb_3abb_4ad7_b496_fb3d440d3ec3.slice: no such file or directory
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.806955 4675 generic.go:334] "Generic (PLEG): container finished" podID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" containerID="956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c" exitCode=0
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.807167 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bdc914-dba8-4fcf-ba94-31eff03448cb","Type":"ContainerDied","Data":"956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c"}
Nov 21 14:01:32 crc kubenswrapper[4675]: I1121 14:01:32.873307 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1a7dc1-fee4-4671-9117-d653c3873ea8" path="/var/lib/kubelet/pods/fb1a7dc1-fee4-4671-9117-d653c3873ea8/volumes"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.004968 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.080454 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": read tcp 10.217.0.2:40414->10.217.0.240:8775: read: connection reset by peer"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.080938 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": read tcp 10.217.0.2:40412->10.217.0.240:8775: read: connection reset by peer"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.098813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-config-data\") pod \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.099116 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-combined-ca-bundle\") pod \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.099253 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zx9\" (UniqueName: \"kubernetes.io/projected/a8bdc914-dba8-4fcf-ba94-31eff03448cb-kube-api-access-l6zx9\") pod \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\" (UID: \"a8bdc914-dba8-4fcf-ba94-31eff03448cb\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.118337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bdc914-dba8-4fcf-ba94-31eff03448cb-kube-api-access-l6zx9" (OuterVolumeSpecName: "kube-api-access-l6zx9") pod "a8bdc914-dba8-4fcf-ba94-31eff03448cb" (UID: "a8bdc914-dba8-4fcf-ba94-31eff03448cb"). InnerVolumeSpecName "kube-api-access-l6zx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.135758 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-config-data" (OuterVolumeSpecName: "config-data") pod "a8bdc914-dba8-4fcf-ba94-31eff03448cb" (UID: "a8bdc914-dba8-4fcf-ba94-31eff03448cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.137312 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8bdc914-dba8-4fcf-ba94-31eff03448cb" (UID: "a8bdc914-dba8-4fcf-ba94-31eff03448cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.155401 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.201995 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.202037 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bdc914-dba8-4fcf-ba94-31eff03448cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.202054 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zx9\" (UniqueName: \"kubernetes.io/projected/a8bdc914-dba8-4fcf-ba94-31eff03448cb-kube-api-access-l6zx9\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.589497 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.622100 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rsns\" (UniqueName: \"kubernetes.io/projected/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-kube-api-access-7rsns\") pod \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.622332 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-nova-metadata-tls-certs\") pod \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.622488 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-combined-ca-bundle\") pod \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.622557 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-config-data\") pod \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.622647 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-logs\") pod \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\" (UID: \"78c6e10d-66b8-4566-80d2-ed0ce8b08e64\") "
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.623787 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-logs" (OuterVolumeSpecName: "logs") pod "78c6e10d-66b8-4566-80d2-ed0ce8b08e64" (UID: "78c6e10d-66b8-4566-80d2-ed0ce8b08e64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.634447 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-kube-api-access-7rsns" (OuterVolumeSpecName: "kube-api-access-7rsns") pod "78c6e10d-66b8-4566-80d2-ed0ce8b08e64" (UID: "78c6e10d-66b8-4566-80d2-ed0ce8b08e64"). InnerVolumeSpecName "kube-api-access-7rsns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.681815 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78c6e10d-66b8-4566-80d2-ed0ce8b08e64" (UID: "78c6e10d-66b8-4566-80d2-ed0ce8b08e64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.711548 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-config-data" (OuterVolumeSpecName: "config-data") pod "78c6e10d-66b8-4566-80d2-ed0ce8b08e64" (UID: "78c6e10d-66b8-4566-80d2-ed0ce8b08e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.725252 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.725283 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.725292 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-logs\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.725301 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rsns\" (UniqueName: \"kubernetes.io/projected/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-kube-api-access-7rsns\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.726883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "78c6e10d-66b8-4566-80d2-ed0ce8b08e64" (UID: "78c6e10d-66b8-4566-80d2-ed0ce8b08e64"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.826287 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerStarted","Data":"9470ace0c5ee6d1a4dc62749264a439ae3938f9875dcee2b5d5d88597d28d667"}
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.827171 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c6e10d-66b8-4566-80d2-ed0ce8b08e64-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.829423 4675 generic.go:334] "Generic (PLEG): container finished" podID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerID="34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139" exitCode=0
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.829504 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c6e10d-66b8-4566-80d2-ed0ce8b08e64","Type":"ContainerDied","Data":"34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139"}
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.829517 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.829544 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c6e10d-66b8-4566-80d2-ed0ce8b08e64","Type":"ContainerDied","Data":"8e3a9b39dd1686be7fb65659e636e0ddab1461cb408df00f6203491fab9d0942"}
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.829568 4675 scope.go:117] "RemoveContainer" containerID="34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.832770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a8bdc914-dba8-4fcf-ba94-31eff03448cb","Type":"ContainerDied","Data":"5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b"}
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.832856 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.872274 4675 scope.go:117] "RemoveContainer" containerID="21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.932081 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.962241 4675 scope.go:117] "RemoveContainer" containerID="34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139"
Nov 21 14:01:33 crc kubenswrapper[4675]: E1121 14:01:33.971362 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139\": container with ID starting with 34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139 not found: ID does not exist" containerID="34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.971416 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139"} err="failed to get container status \"34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139\": rpc error: code = NotFound desc = could not find container \"34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139\": container with ID starting with 34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139 not found: ID does not exist"
Nov 21 14:01:33 crc kubenswrapper[4675]: I1121 14:01:33.971451 4675 scope.go:117] "RemoveContainer" containerID="21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3"
Nov 21 14:01:34 crc kubenswrapper[4675]: E1121 14:01:33.980981 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3\": container with ID starting with 21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3 not found: ID does not exist" containerID="21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:33.981032 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3"} err="failed to get container status \"21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3\": rpc error: code = NotFound desc = could not find container \"21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3\": container with ID starting with 21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3 not found: ID does not exist"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:33.981062 4675 scope.go:117] "RemoveContainer" containerID="956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:33.981191 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.019124 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.044579 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.061681 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: E1121 14:01:34.062303 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-log"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.062325 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-log"
Nov 21 14:01:34 crc kubenswrapper[4675]: E1121 14:01:34.062399 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-metadata"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.062410 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-metadata"
Nov 21 14:01:34 crc kubenswrapper[4675]: E1121 14:01:34.062439 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" containerName="nova-scheduler-scheduler"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.062448 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" containerName="nova-scheduler-scheduler"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.062745 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-log"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.062769 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" containerName="nova-metadata-metadata"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.062800 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" containerName="nova-scheduler-scheduler"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.063817 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.067685 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.074866 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.100500 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.103726 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.106811 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.108037 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.126571 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140271 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a675b127-f342-4527-b0f1-9e668fcf5ede-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140336 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e69762-ea6c-4d7a-a407-8373c1c7b734-logs\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140370 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140429 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7zm\" (UniqueName: \"kubernetes.io/projected/f2e69762-ea6c-4d7a-a407-8373c1c7b734-kube-api-access-gd7zm\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140456 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140475 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-config-data\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140510 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4xh\" (UniqueName: \"kubernetes.io/projected/a675b127-f342-4527-b0f1-9e668fcf5ede-kube-api-access-4p4xh\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.140560 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a675b127-f342-4527-b0f1-9e668fcf5ede-config-data\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242128 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7zm\" (UniqueName: \"kubernetes.io/projected/f2e69762-ea6c-4d7a-a407-8373c1c7b734-kube-api-access-gd7zm\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242181 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242207 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-config-data\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4xh\" (UniqueName: \"kubernetes.io/projected/a675b127-f342-4527-b0f1-9e668fcf5ede-kube-api-access-4p4xh\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a675b127-f342-4527-b0f1-9e668fcf5ede-config-data\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a675b127-f342-4527-b0f1-9e668fcf5ede-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242464 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e69762-ea6c-4d7a-a407-8373c1c7b734-logs\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.242508 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.244587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e69762-ea6c-4d7a-a407-8373c1c7b734-logs\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.248183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a675b127-f342-4527-b0f1-9e668fcf5ede-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.257628 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.258284 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.260100 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e69762-ea6c-4d7a-a407-8373c1c7b734-config-data\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.260199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a675b127-f342-4527-b0f1-9e668fcf5ede-config-data\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.260841 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7zm\" (UniqueName: \"kubernetes.io/projected/f2e69762-ea6c-4d7a-a407-8373c1c7b734-kube-api-access-gd7zm\") pod \"nova-metadata-0\" (UID: \"f2e69762-ea6c-4d7a-a407-8373c1c7b734\") " pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.263189 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4xh\" (UniqueName: \"kubernetes.io/projected/a675b127-f342-4527-b0f1-9e668fcf5ede-kube-api-access-4p4xh\") pod \"nova-scheduler-0\" (UID: \"a675b127-f342-4527-b0f1-9e668fcf5ede\") " pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.397872 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.435737 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.866884 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c6e10d-66b8-4566-80d2-ed0ce8b08e64" path="/var/lib/kubelet/pods/78c6e10d-66b8-4566-80d2-ed0ce8b08e64/volumes"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.868102 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bdc914-dba8-4fcf-ba94-31eff03448cb" path="/var/lib/kubelet/pods/a8bdc914-dba8-4fcf-ba94-31eff03448cb/volumes"
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.868692 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerStarted","Data":"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1"}
Nov 21 14:01:34 crc kubenswrapper[4675]: I1121 14:01:34.910422 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.018531 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.870720 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerStarted","Data":"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.871487 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerStarted","Data":"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.874475 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f2e69762-ea6c-4d7a-a407-8373c1c7b734","Type":"ContainerStarted","Data":"4dadd841585454d921a70196de42093a6a2ed170f606dd199dcd3c79ccf9be81"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.874528 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f2e69762-ea6c-4d7a-a407-8373c1c7b734","Type":"ContainerStarted","Data":"2ad41164b53e10b525d54e761b177669046d1e40a4a74c521a50585334630cd3"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.874545 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f2e69762-ea6c-4d7a-a407-8373c1c7b734","Type":"ContainerStarted","Data":"b0ee33065a0ca127cadc38480b3ee5b0bbc267a0f9977b9e56837629b03ba7d4"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.878521 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a675b127-f342-4527-b0f1-9e668fcf5ede","Type":"ContainerStarted","Data":"c5866265bcf6fbbeef241d828422f4cfdde6794780cbbaffd374f74fa2b10e91"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.878567 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a675b127-f342-4527-b0f1-9e668fcf5ede","Type":"ContainerStarted","Data":"f6a3afe62d35f1a264f638b66d65ab195e0957c89155294730236498df93b228"}
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.909328 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.909301361 podStartE2EDuration="2.909301361s" podCreationTimestamp="2025-11-21 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:35.899681601 +0000 UTC m=+1772.626096328" watchObservedRunningTime="2025-11-21 14:01:35.909301361 +0000 UTC m=+1772.635716098"
Nov 21 14:01:35 crc kubenswrapper[4675]: I1121 14:01:35.920598 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.920577133 podStartE2EDuration="2.920577133s" podCreationTimestamp="2025-11-21 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:35.915341642 +0000 UTC m=+1772.641756369" watchObservedRunningTime="2025-11-21 14:01:35.920577133 +0000 UTC m=+1772.646991860"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.826564 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.892435 4675 generic.go:334] "Generic (PLEG): container finished" podID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerID="72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d" exitCode=0
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.892531 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19c809ef-4117-44b8-a7c0-8b5f0a60dd51","Type":"ContainerDied","Data":"72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d"}
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.892543 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.892563 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"19c809ef-4117-44b8-a7c0-8b5f0a60dd51","Type":"ContainerDied","Data":"706c73891fdc147c59b9b2cd72b77a75381e8c70a8819fab6b024c5f66436298"}
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.892584 4675 scope.go:117] "RemoveContainer" containerID="72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.901946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerStarted","Data":"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875"}
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.916327 4675 scope.go:117] "RemoveContainer" containerID="666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d"
Nov 21 14:01:36 crc kubenswrapper[4675]: E1121 14:01:36.952720 4675 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f2ce385f51f84381a2c84395d51aae1d8c9cbb6a857f51b942f255cc9b5f353f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f2ce385f51f84381a2c84395d51aae1d8c9cbb6a857f51b942f255cc9b5f353f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-568d7fd7cf-ksc5h_8d812b38-ac4b-4262-8642-bfe5c2b19222/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-568d7fd7cf-ksc5h_8d812b38-ac4b-4262-8642-bfe5c2b19222/dnsmasq-dns/0.log: no such file or directory
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.953316 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.670701658 podStartE2EDuration="4.953296578s" podCreationTimestamp="2025-11-21 14:01:32 +0000 UTC" firstStartedPulling="2025-11-21 14:01:33.171188266 +0000 UTC m=+1769.897603003" lastFinishedPulling="2025-11-21 14:01:36.453783196 +0000 UTC m=+1773.180197923" observedRunningTime="2025-11-21 14:01:36.930602531 +0000 UTC m=+1773.657017278" watchObservedRunningTime="2025-11-21 14:01:36.953296578 +0000 UTC m=+1773.679711305"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.960717 4675 scope.go:117] "RemoveContainer" containerID="72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d"
Nov 21 14:01:36 crc kubenswrapper[4675]: E1121 14:01:36.961677 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d\": container with ID starting with 72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d not found: ID does not exist" containerID="72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.961725 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d"} err="failed to get container status \"72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d\": rpc error: code = NotFound desc = could not find container \"72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d\": container with ID starting with 72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d not found: ID does not exist"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.961753 4675 scope.go:117] "RemoveContainer" containerID="666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d"
Nov 21 14:01:36 crc kubenswrapper[4675]: E1121 14:01:36.962248 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d\": container with ID starting with 666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d not found: ID does not exist" containerID="666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d"
Nov 21 14:01:36 crc kubenswrapper[4675]: I1121 14:01:36.962283 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d"} err="failed to get container status \"666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d\": rpc error: code = NotFound desc = could not find container \"666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d\": container with ID starting with 666b2d561aec9327233b304db13b7983ae66f544ba65f987332615f252022d6d not found: ID does not exist"
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.008748 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9fc8\" (UniqueName: \"kubernetes.io/projected/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-kube-api-access-b9fc8\") pod \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") "
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.008876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-config-data\") pod \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") "
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.009003 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-public-tls-certs\") pod \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") "
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.009092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-internal-tls-certs\") pod \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") "
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.009234 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-combined-ca-bundle\") pod \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") "
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.009343 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-logs\") pod \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\" (UID: \"19c809ef-4117-44b8-a7c0-8b5f0a60dd51\") "
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.011596 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-logs" (OuterVolumeSpecName: "logs") pod "19c809ef-4117-44b8-a7c0-8b5f0a60dd51" (UID: "19c809ef-4117-44b8-a7c0-8b5f0a60dd51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.040280 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-kube-api-access-b9fc8" (OuterVolumeSpecName: "kube-api-access-b9fc8") pod "19c809ef-4117-44b8-a7c0-8b5f0a60dd51" (UID: "19c809ef-4117-44b8-a7c0-8b5f0a60dd51"). InnerVolumeSpecName "kube-api-access-b9fc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.073601 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-config-data" (OuterVolumeSpecName: "config-data") pod "19c809ef-4117-44b8-a7c0-8b5f0a60dd51" (UID: "19c809ef-4117-44b8-a7c0-8b5f0a60dd51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.096644 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19c809ef-4117-44b8-a7c0-8b5f0a60dd51" (UID: "19c809ef-4117-44b8-a7c0-8b5f0a60dd51"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.116409 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.116448 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-logs\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.116458 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9fc8\" (UniqueName: \"kubernetes.io/projected/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-kube-api-access-b9fc8\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.116468 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.123852 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "19c809ef-4117-44b8-a7c0-8b5f0a60dd51" (UID: "19c809ef-4117-44b8-a7c0-8b5f0a60dd51"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.143098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "19c809ef-4117-44b8-a7c0-8b5f0a60dd51" (UID: "19c809ef-4117-44b8-a7c0-8b5f0a60dd51"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.218493 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.218523 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c809ef-4117-44b8-a7c0-8b5f0a60dd51-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.244257 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.275021 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.296732 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:37 crc kubenswrapper[4675]: E1121 14:01:37.302149 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-api" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.302184 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-api" Nov 21 14:01:37 crc kubenswrapper[4675]: E1121 14:01:37.302262 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-log" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.302269 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-log" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.302707 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-api" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.302744 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" containerName="nova-api-log" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.308428 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.310923 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.311193 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.311797 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.350343 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.424623 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-config-data\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.424816 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-public-tls-certs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.424886 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.425827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.425883 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48z5q\" (UniqueName: \"kubernetes.io/projected/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-kube-api-access-48z5q\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.426173 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-logs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.536948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.536990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48z5q\" (UniqueName: \"kubernetes.io/projected/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-kube-api-access-48z5q\") 
pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.537029 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-logs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.537123 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-config-data\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.537181 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-public-tls-certs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.537243 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.537800 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-logs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.540426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.542534 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-config-data\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.543739 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-public-tls-certs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.544521 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.554913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48z5q\" (UniqueName: \"kubernetes.io/projected/5bfa0c26-ff80-4079-aef8-6cc1a62ba554-kube-api-access-48z5q\") pod \"nova-api-0\" (UID: \"5bfa0c26-ff80-4079-aef8-6cc1a62ba554\") " pod="openstack/nova-api-0" Nov 
21 14:01:37 crc kubenswrapper[4675]: I1121 14:01:37.634179 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 14:01:38 crc kubenswrapper[4675]: I1121 14:01:38.112427 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 14:01:38 crc kubenswrapper[4675]: I1121 14:01:38.862399 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c809ef-4117-44b8-a7c0-8b5f0a60dd51" path="/var/lib/kubelet/pods/19c809ef-4117-44b8-a7c0-8b5f0a60dd51/volumes" Nov 21 14:01:38 crc kubenswrapper[4675]: I1121 14:01:38.935423 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfa0c26-ff80-4079-aef8-6cc1a62ba554","Type":"ContainerStarted","Data":"2a184a97bebc00d569f427aec5be42e1f8275518ea5c7c2cfc339300f2ad05fb"} Nov 21 14:01:38 crc kubenswrapper[4675]: I1121 14:01:38.935470 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfa0c26-ff80-4079-aef8-6cc1a62ba554","Type":"ContainerStarted","Data":"a66f227e775bc4d8bbbb0f38d4f5d9b8455594f5bf2dd33d2b4f1f479398de23"} Nov 21 14:01:38 crc kubenswrapper[4675]: I1121 14:01:38.935483 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bfa0c26-ff80-4079-aef8-6cc1a62ba554","Type":"ContainerStarted","Data":"b538df5ad2d6093ba975b2682713e3644e4e9768fe1553bcb6d42e73318d7afc"} Nov 21 14:01:38 crc kubenswrapper[4675]: I1121 14:01:38.960189 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.960166562 podStartE2EDuration="1.960166562s" podCreationTimestamp="2025-11-21 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:38.953208188 +0000 UTC m=+1775.679622915" watchObservedRunningTime="2025-11-21 14:01:38.960166562 +0000 UTC m=+1775.686581289" Nov 21 14:01:39 crc kubenswrapper[4675]: I1121 14:01:39.398666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 14:01:39 crc kubenswrapper[4675]: I1121 14:01:39.437131 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 14:01:39 crc kubenswrapper[4675]: I1121 14:01:39.437178 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 14:01:44 crc kubenswrapper[4675]: I1121 14:01:44.398364 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 14:01:44 crc kubenswrapper[4675]: I1121 14:01:44.428786 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 14:01:44 crc kubenswrapper[4675]: I1121 14:01:44.437343 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 14:01:44 crc kubenswrapper[4675]: I1121 14:01:44.437376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 14:01:45 crc kubenswrapper[4675]: I1121 14:01:45.032937 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 14:01:45 crc kubenswrapper[4675]: I1121 14:01:45.451276 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f2e69762-ea6c-4d7a-a407-8373c1c7b734" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.255:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:45 crc kubenswrapper[4675]: I1121 14:01:45.451314 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f2e69762-ea6c-4d7a-a407-8373c1c7b734" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.255:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:46 crc kubenswrapper[4675]: I1121 14:01:46.849006 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:01:46 crc kubenswrapper[4675]: E1121 14:01:46.849703 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:01:47 crc kubenswrapper[4675]: I1121 14:01:47.175796 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 14:01:47 crc kubenswrapper[4675]: I1121 14:01:47.635349 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 14:01:47 crc kubenswrapper[4675]: I1121 14:01:47.636326 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 14:01:48 crc kubenswrapper[4675]: I1121 14:01:48.650253 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bfa0c26-ff80-4079-aef8-6cc1a62ba554" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:48 crc kubenswrapper[4675]: I1121 14:01:48.650271 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bfa0c26-ff80-4079-aef8-6cc1a62ba554" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 14:01:48 crc kubenswrapper[4675]: E1121 14:01:48.681955 4675 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/24e1f944abb48c2895f5a2c4029cb79d62b4dc6ad8339677687dfeb41ac296b4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/24e1f944abb48c2895f5a2c4029cb79d62b4dc6ad8339677687dfeb41ac296b4/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_nova-metadata-0_78c6e10d-66b8-4566-80d2-ed0ce8b08e64/nova-metadata-log/0.log" to get inode usage: stat /var/log/pods/openstack_nova-metadata-0_78c6e10d-66b8-4566-80d2-ed0ce8b08e64/nova-metadata-log/0.log: no such file or directory Nov 21 14:01:49 crc kubenswrapper[4675]: E1121 14:01:49.095829 4675 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c3cdce16efefa32a7aec60c947c9f031dfe21b62381b90a799a27fa275b331c6/diff" to get inode 
Nov 21 14:01:51 crc kubenswrapper[4675]: W1121 14:01:51.624760 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19c809ef_4117_44b8_a7c0_8b5f0a60dd51.slice/crio-72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d.scope WatchSource:0}: Error finding container 72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d: Status 404 returned error can't find the container with id 72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d
Nov 21 14:01:51 crc kubenswrapper[4675]: E1121 14:01:51.720638 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-8e3a9b39dd1686be7fb65659e636e0ddab1461cb408df00f6203491fab9d0942\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-f694966674fcb25b20fa614370f39fd7431eb45365992ebb1de6752c3755badc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-conmon-34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-conmon-785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-conmon-a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-conmon-aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-conmon-956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-conmon-21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-conmon-4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-dfcff063b5976329ba022b96169f20d9de2659998b2804e669f7bb315134a2ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-conmon-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache]"
Nov 21 14:01:51 crc kubenswrapper[4675]: E1121 14:01:51.720702 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-conmon-a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-8e3a9b39dd1686be7fb65659e636e0ddab1461cb408df00f6203491fab9d0942\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-conmon-785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-conmon-21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-conmon-4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-f694966674fcb25b20fa614370f39fd7431eb45365992ebb1de6752c3755badc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-conmon-aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-conmon-956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-dfcff063b5976329ba022b96169f20d9de2659998b2804e669f7bb315134a2ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-conmon-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-conmon-34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:01:51 crc kubenswrapper[4675]: E1121 14:01:51.721183 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19c809ef_4117_44b8_a7c0_8b5f0a60dd51.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19c809ef_4117_44b8_a7c0_8b5f0a60dd51.slice/crio-706c73891fdc147c59b9b2cd72b77a75381e8c70a8819fab6b024c5f66436298\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-conmon-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19c809ef_4117_44b8_a7c0_8b5f0a60dd51.slice/crio-conmon-72d9dcd93c0299a92f1bd7e6a83f3af125d46595b86640c31cd88246a036fb4d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache]"
Nov 21 14:01:51 crc kubenswrapper[4675]: E1121 14:01:51.722966 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-conmon-a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-5ec584e061baa6705c465e8e8f62e308697d76747d6c12efd0d757a18fc8f45b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice/crio-conmon-956ea0c634d63ec2d9912242297b1bf2bd8ec367713f16790313c86de4e3528c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-conmon-21ac8720140fb8bd7ab3ecec9335cf221c1e1e5db642e11c7870da2df5e426b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-f694966674fcb25b20fa614370f39fd7431eb45365992ebb1de6752c3755badc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-8e3a9b39dd1686be7fb65659e636e0ddab1461cb408df00f6203491fab9d0942\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-conmon-4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-conmon-34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bdc914_dba8_4fcf_ba94_31eff03448cb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice/crio-a53c88cdfe84cb420cdeb3ef8a91604caf5530f2d7e0c03dde33ad2b99e2d9b9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-conmon-785207c39784cf508a34ab11e53a19929a71aa4c47b4d6d33933a7fdac864565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-4d748c7ca5ff65c9c856064a2b5da24ea01fe49087c9478f5835e770607a4bff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice/crio-aaa900d9f3f0e310d32205ca1d577c925a7864d94e885e36cba00ea3154395e9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-conmon-aa637840cfe5d26a8380a494c6c6da0a4c60854c45ae9f5958fc9c47282e4aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9ecd64_d77b_4ec0_92aa_e6531e6fde14.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1a7dc1_fee4_4671_9117_d653c3873ea8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05fdf4e_c229_40af_aafe_55dd5beb6cac.slice/crio-conmon-347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d812b38_ac4b_4262_8642_bfe5c2b19222.slice/crio-dfcff063b5976329ba022b96169f20d9de2659998b2804e669f7bb315134a2ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6e10d_66b8_4566_80d2_ed0ce8b08e64.slice/crio-34b3a567076240f0269824bf1ec492bb19e1222a3b7c6b9ff4e7052db9e9c139.scope\": RecentStats: unable to find data in memory cache]"
Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.092497 4675 generic.go:334] "Generic (PLEG): container finished" podID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerID="347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448" exitCode=137
Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.092552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerDied","Data":"347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448"}
Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.092584 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f05fdf4e-c229-40af-aafe-55dd5beb6cac","Type":"ContainerDied","Data":"7f7edcad65017f4a950619188e1d2014fdf6ff4bdc45ba713670578aacd1c646"}
Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.092596 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f7edcad65017f4a950619188e1d2014fdf6ff4bdc45ba713670578aacd1c646"
containerID="7f7edcad65017f4a950619188e1d2014fdf6ff4bdc45ba713670578aacd1c646" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.157646 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190370 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-run-httpd\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbm72\" (UniqueName: \"kubernetes.io/projected/f05fdf4e-c229-40af-aafe-55dd5beb6cac-kube-api-access-sbm72\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190617 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-combined-ca-bundle\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190653 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-sg-core-conf-yaml\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190795 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-scripts\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190844 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-log-httpd\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-config-data\") pod \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\" (UID: \"f05fdf4e-c229-40af-aafe-55dd5beb6cac\") " Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.190969 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.191649 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.191660 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.196296 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05fdf4e-c229-40af-aafe-55dd5beb6cac-kube-api-access-sbm72" (OuterVolumeSpecName: "kube-api-access-sbm72") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "kube-api-access-sbm72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.199330 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-scripts" (OuterVolumeSpecName: "scripts") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.236143 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.294019 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbm72\" (UniqueName: \"kubernetes.io/projected/f05fdf4e-c229-40af-aafe-55dd5beb6cac-kube-api-access-sbm72\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.294060 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.294092 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.294103 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f05fdf4e-c229-40af-aafe-55dd5beb6cac-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.310620 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.330749 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-config-data" (OuterVolumeSpecName: "config-data") pod "f05fdf4e-c229-40af-aafe-55dd5beb6cac" (UID: "f05fdf4e-c229-40af-aafe-55dd5beb6cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.396767 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4675]: I1121 14:01:52.396792 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05fdf4e-c229-40af-aafe-55dd5beb6cac-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.107777 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.136287 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.147852 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.172580 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:53 crc kubenswrapper[4675]: E1121 14:01:53.173385 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-notification-agent" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.173506 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-notification-agent" Nov 21 14:01:53 crc kubenswrapper[4675]: E1121 14:01:53.173589 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-central-agent" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.173653 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-central-agent" Nov 21 14:01:53 crc kubenswrapper[4675]: E1121 14:01:53.173751 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="proxy-httpd" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.173831 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="proxy-httpd" Nov 21 14:01:53 crc kubenswrapper[4675]: E1121 14:01:53.173921 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="sg-core" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.173989 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="sg-core" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.174368 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="proxy-httpd" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.174474 4675 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-notification-agent" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.174558 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="ceilometer-central-agent" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.174639 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" containerName="sg-core" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.177492 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.180664 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.181227 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.191647 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267480 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5cg\" (UniqueName: \"kubernetes.io/projected/acd703eb-c34c-4fc3-bb39-8465e146be23-kube-api-access-pr5cg\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267537 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-log-httpd\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267566 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-config-data\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267601 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-run-httpd\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267697 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.267733 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-scripts\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370310 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370416 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-scripts\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370561 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5cg\" (UniqueName: \"kubernetes.io/projected/acd703eb-c34c-4fc3-bb39-8465e146be23-kube-api-access-pr5cg\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-log-httpd\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370627 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-config-data\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370674 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.370705 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-run-httpd\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.371401 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-run-httpd\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.371442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-log-httpd\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.375000 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.375535 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-config-data\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.375551 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.376497 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-scripts\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.387923 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5cg\" (UniqueName: \"kubernetes.io/projected/acd703eb-c34c-4fc3-bb39-8465e146be23-kube-api-access-pr5cg\") pod \"ceilometer-0\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.495273 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:01:53 crc kubenswrapper[4675]: I1121 14:01:53.971694 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:01:54 crc kubenswrapper[4675]: I1121 14:01:54.122453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerStarted","Data":"f464eed550992a67bc46657a1fbe85a68929ddc80e5705503747d6496004fc50"} Nov 21 14:01:54 crc kubenswrapper[4675]: I1121 14:01:54.441183 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 14:01:54 crc kubenswrapper[4675]: I1121 14:01:54.443184 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 14:01:54 crc kubenswrapper[4675]: I1121 14:01:54.446330 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 14:01:54 crc kubenswrapper[4675]: I1121 14:01:54.863806 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05fdf4e-c229-40af-aafe-55dd5beb6cac" path="/var/lib/kubelet/pods/f05fdf4e-c229-40af-aafe-55dd5beb6cac/volumes" Nov 21 14:01:55 crc kubenswrapper[4675]: I1121 14:01:55.135303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerStarted","Data":"94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd"} Nov 21 14:01:55 crc kubenswrapper[4675]: I1121 14:01:55.140875 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 14:01:55 crc kubenswrapper[4675]: I1121 14:01:55.769184 4675 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 21 14:01:56 crc kubenswrapper[4675]: I1121 14:01:56.148114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerStarted","Data":"595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e"} Nov 21 14:01:57 crc kubenswrapper[4675]: I1121 14:01:57.160189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerStarted","Data":"402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb"} Nov 21 14:01:57 crc kubenswrapper[4675]: I1121 14:01:57.642117 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 14:01:57 crc kubenswrapper[4675]: I1121 14:01:57.642851 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 14:01:57 crc kubenswrapper[4675]: I1121 14:01:57.645616 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 14:01:57 crc kubenswrapper[4675]: I1121 14:01:57.648941 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 14:01:58 crc kubenswrapper[4675]: I1121 14:01:58.176598 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerStarted","Data":"cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8"} Nov 21 14:01:58 crc kubenswrapper[4675]: I1121 14:01:58.176648 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 14:01:58 crc kubenswrapper[4675]: I1121 14:01:58.176664 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:01:58 crc kubenswrapper[4675]: I1121 14:01:58.188997 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 14:01:58 crc kubenswrapper[4675]: I1121 14:01:58.210497 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.664099008 podStartE2EDuration="5.21047501s" podCreationTimestamp="2025-11-21 14:01:53 +0000 UTC" firstStartedPulling="2025-11-21 14:01:53.967211165 +0000 UTC m=+1790.693625892" lastFinishedPulling="2025-11-21 14:01:57.513587167 +0000 UTC m=+1794.240001894" observedRunningTime="2025-11-21 14:01:58.203620109 +0000 UTC m=+1794.930034846" watchObservedRunningTime="2025-11-21 14:01:58.21047501 +0000 UTC m=+1794.936889737" Nov 21 14:01:58 crc kubenswrapper[4675]: I1121 14:01:58.850849 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:01:58 crc kubenswrapper[4675]: E1121 14:01:58.851096 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:02:11 crc kubenswrapper[4675]: I1121 14:02:11.848620 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:02:11 
crc kubenswrapper[4675]: E1121 14:02:11.849422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:02:23 crc kubenswrapper[4675]: I1121 14:02:23.506823 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 14:02:23 crc kubenswrapper[4675]: I1121 14:02:23.849175 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:02:23 crc kubenswrapper[4675]: E1121 14:02:23.849674 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.037022 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.037543 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="8b27bcc8-0305-4074-8d8d-9bb6e33cf000" containerName="kube-state-metrics" containerID="cri-o://f9191200bca3962b6b080fb2ae51422624b33d0f9420993d332fbf85f37af4b1" gracePeriod=30 Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.182056 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.182398 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="dddea1da-ad44-4caa-8719-55dea099d456" containerName="mysqld-exporter" containerID="cri-o://3216aa07f79f465918a0406038d043d807a151fc6b9aa53791c52762e6776dfa" gracePeriod=30 Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.599679 4675 generic.go:334] "Generic (PLEG): container finished" podID="8b27bcc8-0305-4074-8d8d-9bb6e33cf000" containerID="f9191200bca3962b6b080fb2ae51422624b33d0f9420993d332fbf85f37af4b1" exitCode=2 Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.599771 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b27bcc8-0305-4074-8d8d-9bb6e33cf000","Type":"ContainerDied","Data":"f9191200bca3962b6b080fb2ae51422624b33d0f9420993d332fbf85f37af4b1"} Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.600034 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b27bcc8-0305-4074-8d8d-9bb6e33cf000","Type":"ContainerDied","Data":"dcc0e87a87b1c18b153c752d6e64baa98ac5649d3ba6e44ece4da46f91a220c7"} Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.600050 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc0e87a87b1c18b153c752d6e64baa98ac5649d3ba6e44ece4da46f91a220c7" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.602028 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="dddea1da-ad44-4caa-8719-55dea099d456" containerID="3216aa07f79f465918a0406038d043d807a151fc6b9aa53791c52762e6776dfa" exitCode=2 Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.602147 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"dddea1da-ad44-4caa-8719-55dea099d456","Type":"ContainerDied","Data":"3216aa07f79f465918a0406038d043d807a151fc6b9aa53791c52762e6776dfa"} Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.646215 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.725777 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6ph\" (UniqueName: \"kubernetes.io/projected/8b27bcc8-0305-4074-8d8d-9bb6e33cf000-kube-api-access-gx6ph\") pod \"8b27bcc8-0305-4074-8d8d-9bb6e33cf000\" (UID: \"8b27bcc8-0305-4074-8d8d-9bb6e33cf000\") " Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.732959 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b27bcc8-0305-4074-8d8d-9bb6e33cf000-kube-api-access-gx6ph" (OuterVolumeSpecName: "kube-api-access-gx6ph") pod "8b27bcc8-0305-4074-8d8d-9bb6e33cf000" (UID: "8b27bcc8-0305-4074-8d8d-9bb6e33cf000"). InnerVolumeSpecName "kube-api-access-gx6ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.779792 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.832427 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-combined-ca-bundle\") pod \"dddea1da-ad44-4caa-8719-55dea099d456\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.832544 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-config-data\") pod \"dddea1da-ad44-4caa-8719-55dea099d456\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.832735 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvhch\" (UniqueName: \"kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch\") pod \"dddea1da-ad44-4caa-8719-55dea099d456\" (UID: \"dddea1da-ad44-4caa-8719-55dea099d456\") " Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.833750 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6ph\" (UniqueName: \"kubernetes.io/projected/8b27bcc8-0305-4074-8d8d-9bb6e33cf000-kube-api-access-gx6ph\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.837810 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch" (OuterVolumeSpecName: "kube-api-access-pvhch") pod "dddea1da-ad44-4caa-8719-55dea099d456" (UID: "dddea1da-ad44-4caa-8719-55dea099d456"). InnerVolumeSpecName "kube-api-access-pvhch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.870633 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dddea1da-ad44-4caa-8719-55dea099d456" (UID: "dddea1da-ad44-4caa-8719-55dea099d456"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.892663 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-config-data" (OuterVolumeSpecName: "config-data") pod "dddea1da-ad44-4caa-8719-55dea099d456" (UID: "dddea1da-ad44-4caa-8719-55dea099d456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.936542 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvhch\" (UniqueName: \"kubernetes.io/projected/dddea1da-ad44-4caa-8719-55dea099d456-kube-api-access-pvhch\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.936583 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:28 crc kubenswrapper[4675]: I1121 14:02:28.936598 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dddea1da-ad44-4caa-8719-55dea099d456-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.615494 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.615533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"dddea1da-ad44-4caa-8719-55dea099d456","Type":"ContainerDied","Data":"12a62abd7ee93db6147a776cd92028d52ec6163443e0d87fca81fb27f10b4527"} Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.615495 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.615615 4675 scope.go:117] "RemoveContainer" containerID="3216aa07f79f465918a0406038d043d807a151fc6b9aa53791c52762e6776dfa" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.663239 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.682128 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.697128 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.716418 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.726573 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: E1121 14:02:29.727386 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddea1da-ad44-4caa-8719-55dea099d456" containerName="mysqld-exporter" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.727408 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddea1da-ad44-4caa-8719-55dea099d456" containerName="mysqld-exporter" Nov 21 14:02:29 crc kubenswrapper[4675]: E1121 14:02:29.727478 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b27bcc8-0305-4074-8d8d-9bb6e33cf000" containerName="kube-state-metrics" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.727491 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b27bcc8-0305-4074-8d8d-9bb6e33cf000" containerName="kube-state-metrics" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.727792 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b27bcc8-0305-4074-8d8d-9bb6e33cf000" containerName="kube-state-metrics" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.727812 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddea1da-ad44-4caa-8719-55dea099d456" containerName="mysqld-exporter" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.728861 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.731525 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.731651 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.736718 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.738535 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.740551 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.741897 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.748918 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.759867 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.854833 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.854917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.855145 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.855207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77z94\" (UniqueName: \"kubernetes.io/projected/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-api-access-77z94\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.855266 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-config-data\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.855305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.855332 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc 
kubenswrapper[4675]: I1121 14:02:29.855423 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjmdn\" (UniqueName: \"kubernetes.io/projected/151c5400-b481-4494-aacd-020595cc112c-kube-api-access-fjmdn\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959164 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959570 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77z94\" (UniqueName: \"kubernetes.io/projected/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-api-access-77z94\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-config-data\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959705 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959731 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.959880 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjmdn\" (UniqueName: \"kubernetes.io/projected/151c5400-b481-4494-aacd-020595cc112c-kube-api-access-fjmdn\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.965668 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.965735 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.966393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.972760 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-config-data\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.976176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/151c5400-b481-4494-aacd-020595cc112c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.976490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77z94\" (UniqueName: \"kubernetes.io/projected/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-api-access-77z94\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.976762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjmdn\" (UniqueName: \"kubernetes.io/projected/151c5400-b481-4494-aacd-020595cc112c-kube-api-access-fjmdn\") pod \"mysqld-exporter-0\" (UID: \"151c5400-b481-4494-aacd-020595cc112c\") " pod="openstack/mysqld-exporter-0" Nov 21 14:02:29 crc kubenswrapper[4675]: I1121 14:02:29.975845 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a\") " pod="openstack/kube-state-metrics-0" Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.057592 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.070770 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.186526 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.186952 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-central-agent" containerID="cri-o://94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd" gracePeriod=30 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.187163 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="proxy-httpd" containerID="cri-o://cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8" gracePeriod=30 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.187270 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-notification-agent" containerID="cri-o://595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e" gracePeriod=30 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.187293 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="sg-core" containerID="cri-o://402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb" gracePeriod=30 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.614869 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.627408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.645682 4675 generic.go:334] "Generic (PLEG): container finished" podID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerID="cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8" exitCode=0 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.645724 4675 generic.go:334] "Generic (PLEG): container finished" podID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerID="402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb" exitCode=2 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.645732 4675 generic.go:334] "Generic (PLEG): container finished" podID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerID="94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd" exitCode=0 Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.645752 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerDied","Data":"cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8"} Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.645777 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerDied","Data":"402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb"} Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.645786 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerDied","Data":"94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd"} Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.865095 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b27bcc8-0305-4074-8d8d-9bb6e33cf000" path="/var/lib/kubelet/pods/8b27bcc8-0305-4074-8d8d-9bb6e33cf000/volumes" Nov 21 14:02:30 crc kubenswrapper[4675]: I1121 14:02:30.867265 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddea1da-ad44-4caa-8719-55dea099d456" path="/var/lib/kubelet/pods/dddea1da-ad44-4caa-8719-55dea099d456/volumes" Nov 21 14:02:31 crc kubenswrapper[4675]: I1121 14:02:31.661820 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a","Type":"ContainerStarted","Data":"055c8295c3c435fbf2044aec8a2a1483f82822a32ec5661026d82d20e606a04a"} Nov 21 14:02:31 crc kubenswrapper[4675]: I1121 14:02:31.664317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"151c5400-b481-4494-aacd-020595cc112c","Type":"ContainerStarted","Data":"7a3d2e87d2212ecb042fd5006c0b700dbf98b28015fd8a577fe589ab5c707359"} Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.210922 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.314952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-config-data\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.315361 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-log-httpd\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.315532 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-scripts\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.315639 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-sg-core-conf-yaml\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.315692 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-combined-ca-bundle\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.315716 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5cg\" (UniqueName: \"kubernetes.io/projected/acd703eb-c34c-4fc3-bb39-8465e146be23-kube-api-access-pr5cg\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 
14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.315732 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-run-httpd\") pod \"acd703eb-c34c-4fc3-bb39-8465e146be23\" (UID: \"acd703eb-c34c-4fc3-bb39-8465e146be23\") " Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.316525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.317314 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.321085 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd703eb-c34c-4fc3-bb39-8465e146be23-kube-api-access-pr5cg" (OuterVolumeSpecName: "kube-api-access-pr5cg") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "kube-api-access-pr5cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.322877 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-scripts" (OuterVolumeSpecName: "scripts") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.348651 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.401257 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.418682 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.418711 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.418725 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.418734 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr5cg\" (UniqueName: \"kubernetes.io/projected/acd703eb-c34c-4fc3-bb39-8465e146be23-kube-api-access-pr5cg\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.418742 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.418754 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd703eb-c34c-4fc3-bb39-8465e146be23-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.450430 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-config-data" (OuterVolumeSpecName: "config-data") pod "acd703eb-c34c-4fc3-bb39-8465e146be23" (UID: "acd703eb-c34c-4fc3-bb39-8465e146be23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.520768 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd703eb-c34c-4fc3-bb39-8465e146be23-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.676888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"151c5400-b481-4494-aacd-020595cc112c","Type":"ContainerStarted","Data":"38d56368c920229e4970c2fa5f4c479c43adcb6d218507be3244f41ce160c031"} Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.680882 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a","Type":"ContainerStarted","Data":"86128f02a0e0b39c21ec39dcec832c74d4a9da788791948bd001fb8f5ed73260"} Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.680933 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.690341 4675 generic.go:334] "Generic (PLEG): container finished" podID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerID="595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e" exitCode=0 Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.690386 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerDied","Data":"595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e"} Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.690416 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd703eb-c34c-4fc3-bb39-8465e146be23","Type":"ContainerDied","Data":"f464eed550992a67bc46657a1fbe85a68929ddc80e5705503747d6496004fc50"} Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.690433 4675 scope.go:117] "RemoveContainer" containerID="cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.690563 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.709596 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.938892624 podStartE2EDuration="3.709572979s" podCreationTimestamp="2025-11-21 14:02:29 +0000 UTC" firstStartedPulling="2025-11-21 14:02:30.641220649 +0000 UTC m=+1827.367635376" lastFinishedPulling="2025-11-21 14:02:31.411901004 +0000 UTC m=+1828.138315731" observedRunningTime="2025-11-21 14:02:32.693087023 +0000 UTC m=+1829.419501770" watchObservedRunningTime="2025-11-21 14:02:32.709572979 +0000 UTC m=+1829.435987706" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.730530 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.982342492 podStartE2EDuration="3.730508528s" podCreationTimestamp="2025-11-21 14:02:29 +0000 UTC" firstStartedPulling="2025-11-21 14:02:30.635968667 +0000 UTC m=+1827.362383394" lastFinishedPulling="2025-11-21 14:02:31.384134693 +0000 UTC m=+1828.110549430" observedRunningTime="2025-11-21 14:02:32.715972061 +0000 UTC m=+1829.442386788" watchObservedRunningTime="2025-11-21 14:02:32.730508528 +0000 UTC m=+1829.456923245" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.766512 4675 scope.go:117] "RemoveContainer" containerID="402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.769930 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.791399 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.811813 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.812537 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-notification-agent" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.812557 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-notification-agent" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.812567 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-central-agent" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.812575 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-central-agent" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.812606 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="sg-core" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.812613 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="sg-core" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.812633 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="proxy-httpd" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.812640 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="proxy-httpd" Nov 21 14:02:32 crc 
kubenswrapper[4675]: I1121 14:02:32.812886 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-central-agent" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.812974 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="ceilometer-notification-agent" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.813013 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="sg-core" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.813022 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" containerName="proxy-httpd" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.815150 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.818445 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.818487 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.824178 4675 scope.go:117] "RemoveContainer" containerID="595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.824214 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.827388 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.857927 4675 scope.go:117] "RemoveContainer" containerID="94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.864992 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd703eb-c34c-4fc3-bb39-8465e146be23" path="/var/lib/kubelet/pods/acd703eb-c34c-4fc3-bb39-8465e146be23/volumes" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.886232 4675 scope.go:117] "RemoveContainer" containerID="cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.889496 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8\": container with ID starting with cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8 not found: ID does not exist" containerID="cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.889538 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8"} err="failed to get container status \"cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8\": rpc error: code = NotFound desc = could not find container \"cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8\": container with ID starting with cd43ce72bbec05965876ff7ed3249d73e4e3b71a214a9df06f12b639c051aca8 not found: ID does not exist" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.889565 4675 scope.go:117] 
"RemoveContainer" containerID="402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.890110 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb\": container with ID starting with 402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb not found: ID does not exist" containerID="402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.890148 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb"} err="failed to get container status \"402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb\": rpc error: code = NotFound desc = could not find container \"402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb\": container with ID starting with 402c61dd4470d5415c179d6ae6ee9c8a0d7c27e0bde595e475f9fe1f5a0223eb not found: ID does not exist" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.890175 4675 scope.go:117] "RemoveContainer" containerID="595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.890544 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e\": container with ID starting with 595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e not found: ID does not exist" containerID="595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.890573 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e"} err="failed to get container status \"595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e\": rpc error: code = NotFound desc = could not find container \"595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e\": container with ID starting with 595869d8b5f0d2c9dd29a5c30068fccee943cde477d601ca5974793c9f2ce69e not found: ID does not exist" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.890589 4675 scope.go:117] "RemoveContainer" containerID="94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd" Nov 21 14:02:32 crc kubenswrapper[4675]: E1121 14:02:32.890996 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd\": container with ID starting with 94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd not found: ID does not exist" containerID="94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.891038 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd"} err="failed to get container status \"94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd\": rpc error: code = NotFound desc = could not find container \"94e5fbc9dbb7b57380fc661812c59c6ce3b6877d49c2b8148e9f34e0c763a6dd\": container with ID starting with 
Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933502 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9glk\" (UniqueName: \"kubernetes.io/projected/4d2f7214-6809-4a18-a4c9-de45af832efa-kube-api-access-q9glk\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933578 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-config-data\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-run-httpd\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933687 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-scripts\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933764 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:32 crc kubenswrapper[4675]: I1121 14:02:32.933824 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-log-httpd\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036567 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-run-httpd\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036659 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-scripts\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036780 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036885 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-log-httpd\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036974 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9glk\" (UniqueName: \"kubernetes.io/projected/4d2f7214-6809-4a18-a4c9-de45af832efa-kube-api-access-q9glk\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.036999 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.037028 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-config-data\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.037178 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-run-httpd\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.037583 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-log-httpd\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.042200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.042572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.042943 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.044328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-scripts\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.046052 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-config-data\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.055662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9glk\" (UniqueName: \"kubernetes.io/projected/4d2f7214-6809-4a18-a4c9-de45af832efa-kube-api-access-q9glk\") pod \"ceilometer-0\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.142401 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.635463 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:33 crc kubenswrapper[4675]: W1121 14:02:33.641856 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d2f7214_6809_4a18_a4c9_de45af832efa.slice/crio-668bad9e667a0c90f9b78987c0f56e67bb493b90bf7d8cca70d9f839955ee5d3 WatchSource:0}: Error finding container 668bad9e667a0c90f9b78987c0f56e67bb493b90bf7d8cca70d9f839955ee5d3: Status 404 returned error can't find the container with id 668bad9e667a0c90f9b78987c0f56e67bb493b90bf7d8cca70d9f839955ee5d3 Nov 21 14:02:33 crc kubenswrapper[4675]: I1121 14:02:33.704446 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerStarted","Data":"668bad9e667a0c90f9b78987c0f56e67bb493b90bf7d8cca70d9f839955ee5d3"} Nov 21 14:02:34 crc kubenswrapper[4675]: I1121 14:02:34.716375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerStarted","Data":"901efb6d7c03fe570b6c4eb91ff6dfb3ac4a6414e145b51fa34b6762154ab3d3"} Nov 21 14:02:35 crc kubenswrapper[4675]: I1121 14:02:35.727571 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerStarted","Data":"937aa7025375b8beafbf0b86d7881e19fbce16331554c7133d380ce6234eee00"} Nov 21 14:02:36 crc kubenswrapper[4675]: I1121 14:02:36.741993 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerStarted","Data":"e24b126f71db79810ad4fa082bcb5a3cc81858cdf953212bdc2064ea2c8bb9c2"} Nov 21 14:02:38 crc kubenswrapper[4675]: I1121 14:02:38.764084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerStarted","Data":"0c19e0b1e462b20f003ce5f685d3b19043503bbbefc49d446aab5d3aa0377240"} Nov 21 14:02:38 crc kubenswrapper[4675]: I1121 14:02:38.764806 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:02:38 crc kubenswrapper[4675]: I1121 14:02:38.795747 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.073925814 podStartE2EDuration="6.795726994s" podCreationTimestamp="2025-11-21 14:02:32 +0000 UTC" firstStartedPulling="2025-11-21 14:02:33.644394039 +0000 UTC m=+1830.370808766" lastFinishedPulling="2025-11-21 14:02:37.366195219 +0000 UTC m=+1834.092609946" observedRunningTime="2025-11-21 14:02:38.791866096 +0000 UTC m=+1835.518280823" watchObservedRunningTime="2025-11-21 14:02:38.795726994 +0000 UTC m=+1835.522141721" Nov 21 14:02:38 crc kubenswrapper[4675]: I1121 14:02:38.849714 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:02:38 crc kubenswrapper[4675]: E1121 14:02:38.850000 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.442820 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-n65cw"] Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.454849 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-n65cw"] Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.545454 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-7dswk"] Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.547036 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.558555 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7dswk"] Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.706777 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-config-data\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.707341 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4g54\" (UniqueName: \"kubernetes.io/projected/bcf5e4dd-3414-4e74-a64e-94403684c91b-kube-api-access-p4g54\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.707439 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-combined-ca-bundle\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.809366 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-config-data\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.809450 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4g54\" (UniqueName: \"kubernetes.io/projected/bcf5e4dd-3414-4e74-a64e-94403684c91b-kube-api-access-p4g54\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.809500 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-combined-ca-bundle\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.815399 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-combined-ca-bundle\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.827893 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-config-data\") pod \"heat-db-sync-7dswk\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.831799 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4g54\" (UniqueName: \"kubernetes.io/projected/bcf5e4dd-3414-4e74-a64e-94403684c91b-kube-api-access-p4g54\") pod \"heat-db-sync-7dswk\" (UID: 
\"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:39 crc kubenswrapper[4675]: I1121 14:02:39.878257 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7dswk" Nov 21 14:02:40 crc kubenswrapper[4675]: I1121 14:02:40.092136 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 14:02:40 crc kubenswrapper[4675]: I1121 14:02:40.475535 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7dswk"] Nov 21 14:02:40 crc kubenswrapper[4675]: I1121 14:02:40.794148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7dswk" event={"ID":"bcf5e4dd-3414-4e74-a64e-94403684c91b","Type":"ContainerStarted","Data":"a8f6a4ab0374b998e7c3609440cd4138cb55b0a3def301ac82a34af625944af8"} Nov 21 14:02:40 crc kubenswrapper[4675]: I1121 14:02:40.865248 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baacdfb7-787a-462a-8102-472a47283224" path="/var/lib/kubelet/pods/baacdfb7-787a-462a-8102-472a47283224/volumes" Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.088675 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.143851 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.144145 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="proxy-httpd" containerID="cri-o://0c19e0b1e462b20f003ce5f685d3b19043503bbbefc49d446aab5d3aa0377240" gracePeriod=30 Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.144287 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="sg-core" containerID="cri-o://e24b126f71db79810ad4fa082bcb5a3cc81858cdf953212bdc2064ea2c8bb9c2" gracePeriod=30 Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.144335 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-notification-agent" containerID="cri-o://937aa7025375b8beafbf0b86d7881e19fbce16331554c7133d380ce6234eee00" gracePeriod=30 Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.144113 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-central-agent" containerID="cri-o://901efb6d7c03fe570b6c4eb91ff6dfb3ac4a6414e145b51fa34b6762154ab3d3" gracePeriod=30 Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.190987 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:02:42 crc kubenswrapper[4675]: E1121 14:02:42.798210 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d2f7214_6809_4a18_a4c9_de45af832efa.slice/crio-901efb6d7c03fe570b6c4eb91ff6dfb3ac4a6414e145b51fa34b6762154ab3d3.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.827866 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerID="0c19e0b1e462b20f003ce5f685d3b19043503bbbefc49d446aab5d3aa0377240" exitCode=0 Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.827898 4675 generic.go:334] "Generic (PLEG): container finished" podID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerID="e24b126f71db79810ad4fa082bcb5a3cc81858cdf953212bdc2064ea2c8bb9c2" exitCode=2 Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.827916 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerDied","Data":"0c19e0b1e462b20f003ce5f685d3b19043503bbbefc49d446aab5d3aa0377240"} Nov 21 14:02:42 crc kubenswrapper[4675]: I1121 14:02:42.827941 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerDied","Data":"e24b126f71db79810ad4fa082bcb5a3cc81858cdf953212bdc2064ea2c8bb9c2"} Nov 21 14:02:43 crc kubenswrapper[4675]: I1121 14:02:43.855956 4675 generic.go:334] "Generic (PLEG): container finished" podID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerID="937aa7025375b8beafbf0b86d7881e19fbce16331554c7133d380ce6234eee00" exitCode=0 Nov 21 14:02:43 crc kubenswrapper[4675]: I1121 14:02:43.856290 4675 generic.go:334] "Generic (PLEG): container finished" podID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerID="901efb6d7c03fe570b6c4eb91ff6dfb3ac4a6414e145b51fa34b6762154ab3d3" exitCode=0 Nov 21 14:02:43 crc kubenswrapper[4675]: I1121 14:02:43.856011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerDied","Data":"937aa7025375b8beafbf0b86d7881e19fbce16331554c7133d380ce6234eee00"} Nov 21 14:02:43 crc kubenswrapper[4675]: I1121 14:02:43.856328 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerDied","Data":"901efb6d7c03fe570b6c4eb91ff6dfb3ac4a6414e145b51fa34b6762154ab3d3"} Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.098214 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.239495 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-run-httpd\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.239868 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-combined-ca-bundle\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.239866 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.239920 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-sg-core-conf-yaml\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.240074 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-ceilometer-tls-certs\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.240202 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-scripts\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.240296 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-log-httpd\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.240409 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-config-data\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.240471 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9glk\" (UniqueName: \"kubernetes.io/projected/4d2f7214-6809-4a18-a4c9-de45af832efa-kube-api-access-q9glk\") pod \"4d2f7214-6809-4a18-a4c9-de45af832efa\" (UID: \"4d2f7214-6809-4a18-a4c9-de45af832efa\") " Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.241173 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.244337 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.244401 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d2f7214-6809-4a18-a4c9-de45af832efa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.247254 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2f7214-6809-4a18-a4c9-de45af832efa-kube-api-access-q9glk" (OuterVolumeSpecName: "kube-api-access-q9glk") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "kube-api-access-q9glk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.266006 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-scripts" (OuterVolumeSpecName: "scripts") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.275414 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.346673 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9glk\" (UniqueName: \"kubernetes.io/projected/4d2f7214-6809-4a18-a4c9-de45af832efa-kube-api-access-q9glk\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.346718 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.346728 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.347476 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.388654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.424271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-config-data" (OuterVolumeSpecName: "config-data") pod "4d2f7214-6809-4a18-a4c9-de45af832efa" (UID: "4d2f7214-6809-4a18-a4c9-de45af832efa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.449817 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.449855 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.449879 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d2f7214-6809-4a18-a4c9-de45af832efa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.888353 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d2f7214-6809-4a18-a4c9-de45af832efa","Type":"ContainerDied","Data":"668bad9e667a0c90f9b78987c0f56e67bb493b90bf7d8cca70d9f839955ee5d3"} Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.888428 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.888627 4675 scope.go:117] "RemoveContainer" containerID="0c19e0b1e462b20f003ce5f685d3b19043503bbbefc49d446aab5d3aa0377240" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.946342 4675 scope.go:117] "RemoveContainer" containerID="e24b126f71db79810ad4fa082bcb5a3cc81858cdf953212bdc2064ea2c8bb9c2" Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.950091 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.967192 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:44 crc kubenswrapper[4675]: I1121 14:02:44.997956 4675 scope.go:117] "RemoveContainer" containerID="937aa7025375b8beafbf0b86d7881e19fbce16331554c7133d380ce6234eee00" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.028287 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:45 crc kubenswrapper[4675]: E1121 14:02:45.029195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="proxy-httpd" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.029212 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="proxy-httpd" Nov 21 14:02:45 crc kubenswrapper[4675]: E1121 14:02:45.029230 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-central-agent" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.029237 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-central-agent" Nov 21 14:02:45 crc kubenswrapper[4675]: E1121 14:02:45.029278 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="sg-core" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.029284 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="sg-core" Nov 21 14:02:45 crc kubenswrapper[4675]: E1121 
14:02:45.029300 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-notification-agent" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.029306 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-notification-agent" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.030009 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-central-agent" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.030026 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="proxy-httpd" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.030072 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="ceilometer-notification-agent" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.037589 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" containerName="sg-core" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.047961 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.061026 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.063327 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.063392 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.079767 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.112777 4675 scope.go:117] "RemoveContainer" containerID="901efb6d7c03fe570b6c4eb91ff6dfb3ac4a6414e145b51fa34b6762154ab3d3" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.181222 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34d8b0b-08da-4455-b70a-e4a7a4dff526-run-httpd\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.181335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-scripts\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.181592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.181724 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d34d8b0b-08da-4455-b70a-e4a7a4dff526-log-httpd\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.182004 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-config-data\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.182027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhhtv\" (UniqueName: \"kubernetes.io/projected/d34d8b0b-08da-4455-b70a-e4a7a4dff526-kube-api-access-dhhtv\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.182156 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.182244 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285274 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-config-data\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhhtv\" (UniqueName: \"kubernetes.io/projected/d34d8b0b-08da-4455-b70a-e4a7a4dff526-kube-api-access-dhhtv\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285399 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285471 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285545 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34d8b0b-08da-4455-b70a-e4a7a4dff526-run-httpd\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285646 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-scripts\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.285736 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34d8b0b-08da-4455-b70a-e4a7a4dff526-log-httpd\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.286148 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34d8b0b-08da-4455-b70a-e4a7a4dff526-run-httpd\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.286266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d34d8b0b-08da-4455-b70a-e4a7a4dff526-log-httpd\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.295037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.295590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-config-data\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.298569 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-scripts\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.298564 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.305345 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhhtv\" (UniqueName: \"kubernetes.io/projected/d34d8b0b-08da-4455-b70a-e4a7a4dff526-kube-api-access-dhhtv\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.305406 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d34d8b0b-08da-4455-b70a-e4a7a4dff526-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d34d8b0b-08da-4455-b70a-e4a7a4dff526\") " pod="openstack/ceilometer-0" Nov 21 14:02:45 crc kubenswrapper[4675]: I1121 14:02:45.392454 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 14:02:46 crc kubenswrapper[4675]: W1121 14:02:45.997388 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd34d8b0b_08da_4455_b70a_e4a7a4dff526.slice/crio-db82a2e832367037bbf0199e2da950461672197afbf73052a47ebc0e82a7e1f6 WatchSource:0}: Error finding container db82a2e832367037bbf0199e2da950461672197afbf73052a47ebc0e82a7e1f6: Status 404 returned error can't find the container with id db82a2e832367037bbf0199e2da950461672197afbf73052a47ebc0e82a7e1f6 Nov 21 14:02:46 crc kubenswrapper[4675]: I1121 14:02:46.001423 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 14:02:46 crc kubenswrapper[4675]: I1121 14:02:46.865367 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2f7214-6809-4a18-a4c9-de45af832efa" path="/var/lib/kubelet/pods/4d2f7214-6809-4a18-a4c9-de45af832efa/volumes" Nov 21 14:02:46 crc kubenswrapper[4675]: I1121 14:02:46.919010 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34d8b0b-08da-4455-b70a-e4a7a4dff526","Type":"ContainerStarted","Data":"db82a2e832367037bbf0199e2da950461672197afbf73052a47ebc0e82a7e1f6"} Nov 21 14:02:49 crc kubenswrapper[4675]: I1121 14:02:49.067147 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="rabbitmq" containerID="cri-o://e716dc77663bdc3d446a01e4770fdedae51229e4b5b9756e41a1f37f3caefc41" gracePeriod=604794 Nov 21 14:02:49 crc kubenswrapper[4675]: I1121 14:02:49.082333 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="rabbitmq" containerID="cri-o://d89564125adb7ca8ec871adff289d88dd828d0d4bdd18a5e3cceb25b024051a0" gracePeriod=604794 Nov 21 14:02:50 crc kubenswrapper[4675]: I1121 14:02:50.849447 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:02:50 crc kubenswrapper[4675]: E1121 14:02:50.850261 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:02:51 crc kubenswrapper[4675]: I1121 14:02:51.154144 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Nov 21 14:02:51 crc kubenswrapper[4675]: I1121 14:02:51.473228 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="rabbitmq" 
probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Nov 21 14:02:59 crc kubenswrapper[4675]: I1121 14:02:59.053924 4675 generic.go:334] "Generic (PLEG): container finished" podID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerID="e716dc77663bdc3d446a01e4770fdedae51229e4b5b9756e41a1f37f3caefc41" exitCode=0 Nov 21 14:02:59 crc kubenswrapper[4675]: I1121 14:02:59.053964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d25ef58-c63a-4689-9ca0-3955b0a3d1df","Type":"ContainerDied","Data":"e716dc77663bdc3d446a01e4770fdedae51229e4b5b9756e41a1f37f3caefc41"} Nov 21 14:02:59 crc kubenswrapper[4675]: I1121 14:02:59.057050 4675 generic.go:334] "Generic (PLEG): container finished" podID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerID="d89564125adb7ca8ec871adff289d88dd828d0d4bdd18a5e3cceb25b024051a0" exitCode=0 Nov 21 14:02:59 crc kubenswrapper[4675]: I1121 14:02:59.057115 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22bdc76a-2740-432c-a43f-e0a57fdcb2c4","Type":"ContainerDied","Data":"d89564125adb7ca8ec871adff289d88dd828d0d4bdd18a5e3cceb25b024051a0"} Nov 21 14:03:01 crc kubenswrapper[4675]: I1121 14:03:01.154985 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Nov 21 14:03:01 crc kubenswrapper[4675]: I1121 14:03:01.473311 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Nov 21 14:03:01 crc kubenswrapper[4675]: I1121 14:03:01.849660 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:03:01 crc kubenswrapper[4675]: E1121 14:03:01.850495 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.350436 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-k5lxn"] Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.352874 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.355929 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.368629 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-k5lxn"] Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.472306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5xx\" (UniqueName: \"kubernetes.io/projected/3eb6992d-1fd9-4a75-80e9-ddc54114b948-kube-api-access-jh5xx\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.472855 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.473144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.473861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-config\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.474160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.474698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.474957 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" 
(UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578178 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578389 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh5xx\" (UniqueName: \"kubernetes.io/projected/3eb6992d-1fd9-4a75-80e9-ddc54114b948-kube-api-access-jh5xx\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578527 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-config\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.578592 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.579087 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.580006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.580050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.581088 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.581175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.581227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-config\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.603800 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh5xx\" (UniqueName: \"kubernetes.io/projected/3eb6992d-1fd9-4a75-80e9-ddc54114b948-kube-api-access-jh5xx\") pod \"dnsmasq-dns-5b75489c6f-k5lxn\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:07 crc kubenswrapper[4675]: I1121 14:03:07.679132 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.719031 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.743646 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883161 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-plugins-conf\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-plugins\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883318 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883368 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb2s5\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-kube-api-access-pb2s5\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883397 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-server-conf\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883431 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-config-data\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883447 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mlzx\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-kube-api-access-9mlzx\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883472 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-confd\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883508 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-tls\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 
14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883554 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-config-data\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883585 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-pod-info\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883637 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-erlang-cookie\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883678 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-pod-info\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883699 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-erlang-cookie-secret\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883715 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-confd\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883743 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-erlang-cookie-secret\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883768 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-erlang-cookie\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-plugins\") pod \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\" (UID: \"8d25ef58-c63a-4689-9ca0-3955b0a3d1df\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883841 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-server-conf\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") 
" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883863 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-tls\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.883910 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-plugins-conf\") pod \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\" (UID: \"22bdc76a-2740-432c-a43f-e0a57fdcb2c4\") " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.890810 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.895213 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.898531 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.899713 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-kube-api-access-9mlzx" (OuterVolumeSpecName: "kube-api-access-9mlzx") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "kube-api-access-9mlzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.903787 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.904178 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.914249 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.927462 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.931370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-kube-api-access-pb2s5" (OuterVolumeSpecName: "kube-api-access-pb2s5") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "kube-api-access-pb2s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.931432 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-pod-info" (OuterVolumeSpecName: "pod-info") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.945323 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.945891 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-config-data" (OuterVolumeSpecName: "config-data") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.946749 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.948044 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.952577 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-pod-info" (OuterVolumeSpecName: "pod-info") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.963174 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.968554 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.991903 4675 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.991934 4675 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.991943 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.991963 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992013 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb2s5\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-kube-api-access-pb2s5\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992044 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mlzx\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-kube-api-access-9mlzx\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992086 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992102 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992115 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992124 4675 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992140 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992152 4675 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992183 4675 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc 
kubenswrapper[4675]: I1121 14:03:10.992196 4675 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992208 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992219 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:10 crc kubenswrapper[4675]: I1121 14:03:10.992792 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.037638 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-config-data" (OuterVolumeSpecName: "config-data") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.044525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-server-conf" (OuterVolumeSpecName: "server-conf") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.065535 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.071763 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-server-conf" (OuterVolumeSpecName: "server-conf") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.075354 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.095382 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.095419 4675 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.095431 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.095444 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.095455 4675 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.189612 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "22bdc76a-2740-432c-a43f-e0a57fdcb2c4" (UID: "22bdc76a-2740-432c-a43f-e0a57fdcb2c4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.193280 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8d25ef58-c63a-4689-9ca0-3955b0a3d1df" (UID: "8d25ef58-c63a-4689-9ca0-3955b0a3d1df"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.197937 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d25ef58-c63a-4689-9ca0-3955b0a3d1df-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.197973 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22bdc76a-2740-432c-a43f-e0a57fdcb2c4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.207288 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d25ef58-c63a-4689-9ca0-3955b0a3d1df","Type":"ContainerDied","Data":"3dfd51c5f72df211062294f51f2fc6079112dbb8d1373b30fcec7bfe871117ac"} Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.207343 4675 scope.go:117] "RemoveContainer" containerID="e716dc77663bdc3d446a01e4770fdedae51229e4b5b9756e41a1f37f3caefc41" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.207486 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.212657 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22bdc76a-2740-432c-a43f-e0a57fdcb2c4","Type":"ContainerDied","Data":"a326d3797d13f2190847bc801f31a23e443eef2b4bd24b2ccd28f271bdc8cf9a"} Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.212751 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.273679 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.286709 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.304800 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.317951 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.331462 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.332256 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="setup-container" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.332328 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="setup-container" Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.332345 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="rabbitmq" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.332351 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="rabbitmq" Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.332403 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="rabbitmq" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 
14:03:11.332410 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="rabbitmq" Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.332429 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="setup-container" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.332434 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="setup-container" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.332793 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" containerName="rabbitmq" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.332828 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" containerName="rabbitmq" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.334551 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.337951 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.338429 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.338866 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.342982 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.345326 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.346696 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.346914 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.347023 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dd5hp" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.347834 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.351637 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.351738 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.354573 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.356363 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.365397 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.365637 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4nr7c" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.366396 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.366670 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.370425 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.401999 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402076 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402116 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402137 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402163 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402200 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402216 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402238 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402265 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402284 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7h58\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-kube-api-access-q7h58\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402314 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402333 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402351 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402526 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.402545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403443 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403654 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403719 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mfb\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-kube-api-access-c5mfb\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403841 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403871 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.403896 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506449 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506488 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506572 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mfb\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-kube-api-access-c5mfb\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506602 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506638 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506658 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506707 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506761 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506809 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506940 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506977 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.507598 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 
14:03:11.507640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.507889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.508273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.508514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.508629 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.506997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.508671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.508733 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.508779 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7h58\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-kube-api-access-q7h58\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.509280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.509463 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.509517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.509564 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.509776 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.509818 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.510684 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.510867 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.512050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.513237 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.521114 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.521314 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.523488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.523438 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mfb\" (UniqueName: \"kubernetes.io/projected/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-kube-api-access-c5mfb\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.524131 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b2ab3dd-83aa-4d37-8f44-bb3d277932fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.526928 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.527261 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7h58\" (UniqueName: \"kubernetes.io/projected/a5ef674f-8b42-40b1-ba1a-fa2d68858b31-kube-api-access-q7h58\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.568202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.568648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a5ef674f-8b42-40b1-ba1a-fa2d68858b31\") " pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.676566 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: I1121 14:03:11.693618 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.796243 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.796299 4675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 21 14:03:11 crc kubenswrapper[4675]: E1121 14:03:11.796466 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh56h574h5f8h675h586h688h685hb4h667hbh596h596hdchc5hb9h54ch5dbh64fhb5hf5h5ffhc9h5cchcch9bh7dh68fh66h54fh66bh66fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhhtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d34d8b0b-08da-4455-b70a-e4a7a4dff526): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.106009 4675 scope.go:117] "RemoveContainer" containerID="a22a83806ff53fda2e092623ac08ebbffd804b4823e84aa138ab79f19378f685" Nov 21 14:03:12 crc kubenswrapper[4675]: E1121 14:03:12.106118 4675 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 21 14:03:12 crc kubenswrapper[4675]: E1121 14:03:12.106873 4675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 21 14:03:12 crc kubenswrapper[4675]: E1121 14:03:12.106995 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4g54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7dswk_openstack(bcf5e4dd-3414-4e74-a64e-94403684c91b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 14:03:12 crc kubenswrapper[4675]: E1121 14:03:12.108877 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-7dswk" podUID="bcf5e4dd-3414-4e74-a64e-94403684c91b" Nov 21 14:03:12 crc kubenswrapper[4675]: E1121 14:03:12.278934 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-7dswk" podUID="bcf5e4dd-3414-4e74-a64e-94403684c91b" Nov 
21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.348307 4675 scope.go:117] "RemoveContainer" containerID="d89564125adb7ca8ec871adff289d88dd828d0d4bdd18a5e3cceb25b024051a0" Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.384361 4675 scope.go:117] "RemoveContainer" containerID="1d6d46a106cd3fc5f9be1fd55be2418cdb5bfcfe23f9faec67f0aa8d972ea46d" Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.682138 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-k5lxn"] Nov 21 14:03:12 crc kubenswrapper[4675]: W1121 14:03:12.786016 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b2ab3dd_83aa_4d37_8f44_bb3d277932fb.slice/crio-0a170a2fccbecefd8751b2226a20f67f6bf1e95b43a691bc03c40e79be462f80 WatchSource:0}: Error finding container 0a170a2fccbecefd8751b2226a20f67f6bf1e95b43a691bc03c40e79be462f80: Status 404 returned error can't find the container with id 0a170a2fccbecefd8751b2226a20f67f6bf1e95b43a691bc03c40e79be462f80 Nov 21 14:03:12 crc kubenswrapper[4675]: W1121 14:03:12.790204 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5ef674f_8b42_40b1_ba1a_fa2d68858b31.slice/crio-cf5287bf1605684a4cf971d1ff11ab61b9834faa44491dbf1d2f32bdee284cdc WatchSource:0}: Error finding container cf5287bf1605684a4cf971d1ff11ab61b9834faa44491dbf1d2f32bdee284cdc: Status 404 returned error can't find the container with id cf5287bf1605684a4cf971d1ff11ab61b9834faa44491dbf1d2f32bdee284cdc Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.795865 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.810026 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.864851 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bdc76a-2740-432c-a43f-e0a57fdcb2c4" path="/var/lib/kubelet/pods/22bdc76a-2740-432c-a43f-e0a57fdcb2c4/volumes" Nov 21 14:03:12 crc kubenswrapper[4675]: I1121 14:03:12.867424 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d25ef58-c63a-4689-9ca0-3955b0a3d1df" path="/var/lib/kubelet/pods/8d25ef58-c63a-4689-9ca0-3955b0a3d1df/volumes" Nov 21 14:03:13 crc kubenswrapper[4675]: E1121 14:03:13.169647 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb6992d_1fd9_4a75_80e9_ddc54114b948.slice/crio-d9f46fe9721c85dd442e409c7d8fba69ad05398e286fc6391e48af301a767a40.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:03:13 crc kubenswrapper[4675]: I1121 14:03:13.291893 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ef674f-8b42-40b1-ba1a-fa2d68858b31","Type":"ContainerStarted","Data":"cf5287bf1605684a4cf971d1ff11ab61b9834faa44491dbf1d2f32bdee284cdc"} Nov 21 14:03:13 crc kubenswrapper[4675]: I1121 14:03:13.294967 4675 generic.go:334] "Generic (PLEG): container finished" podID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerID="d9f46fe9721c85dd442e409c7d8fba69ad05398e286fc6391e48af301a767a40" exitCode=0 Nov 21 14:03:13 crc kubenswrapper[4675]: I1121 14:03:13.295053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" 
event={"ID":"3eb6992d-1fd9-4a75-80e9-ddc54114b948","Type":"ContainerDied","Data":"d9f46fe9721c85dd442e409c7d8fba69ad05398e286fc6391e48af301a767a40"} Nov 21 14:03:13 crc kubenswrapper[4675]: I1121 14:03:13.295095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" event={"ID":"3eb6992d-1fd9-4a75-80e9-ddc54114b948","Type":"ContainerStarted","Data":"b9a4cd807b1b0baa05a60cc1c6854a86f34ce3636c381ed1d08c26c65cd9ce50"} Nov 21 14:03:13 crc kubenswrapper[4675]: I1121 14:03:13.297826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb","Type":"ContainerStarted","Data":"0a170a2fccbecefd8751b2226a20f67f6bf1e95b43a691bc03c40e79be462f80"} Nov 21 14:03:13 crc kubenswrapper[4675]: I1121 14:03:13.302168 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34d8b0b-08da-4455-b70a-e4a7a4dff526","Type":"ContainerStarted","Data":"bdf305ea1dce1040c1a0e136116a49928eb2c85826a9e7e63ecb9754ca33ee2b"} Nov 21 14:03:14 crc kubenswrapper[4675]: I1121 14:03:14.316308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34d8b0b-08da-4455-b70a-e4a7a4dff526","Type":"ContainerStarted","Data":"d10be6b34b0341669251939cd8b3fd166fb54ef7ce93776862a91259af717035"} Nov 21 14:03:14 crc kubenswrapper[4675]: I1121 14:03:14.319980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" event={"ID":"3eb6992d-1fd9-4a75-80e9-ddc54114b948","Type":"ContainerStarted","Data":"84cd214249873fb6c6fac6717456f08ec84d539762d76bf68b48fe01a4bfc0d4"} Nov 21 14:03:14 crc kubenswrapper[4675]: I1121 14:03:14.320159 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:14 crc kubenswrapper[4675]: I1121 14:03:14.346848 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" podStartSLOduration=7.34683109 podStartE2EDuration="7.34683109s" podCreationTimestamp="2025-11-21 14:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:03:14.337960726 +0000 UTC m=+1871.064375463" watchObservedRunningTime="2025-11-21 14:03:14.34683109 +0000 UTC m=+1871.073245817" Nov 21 14:03:15 crc kubenswrapper[4675]: I1121 14:03:15.333127 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ef674f-8b42-40b1-ba1a-fa2d68858b31","Type":"ContainerStarted","Data":"9397a766b4d2dc90884b5dfa505979cc29440e7b906cd9fe0468a077b8a8122c"} Nov 21 14:03:15 crc kubenswrapper[4675]: I1121 14:03:15.353539 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb","Type":"ContainerStarted","Data":"72d7601e46916f6ec8f7c6edfbf347d682069270461505cfd00a735725fa4f01"} Nov 21 14:03:15 crc kubenswrapper[4675]: E1121 14:03:15.808627 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d34d8b0b-08da-4455-b70a-e4a7a4dff526" Nov 21 14:03:16 crc kubenswrapper[4675]: I1121 14:03:16.367120 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d34d8b0b-08da-4455-b70a-e4a7a4dff526","Type":"ContainerStarted","Data":"a78cc5cdf651da446f40d91d8dda39b527d4a2e9ccbdf6b8ce9b42fd8b419aa8"} Nov 21 14:03:16 crc kubenswrapper[4675]: E1121 14:03:16.369244 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="d34d8b0b-08da-4455-b70a-e4a7a4dff526" Nov 21 14:03:16 crc kubenswrapper[4675]: I1121 14:03:16.849675 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:03:16 crc kubenswrapper[4675]: E1121 14:03:16.849997 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:03:17 crc kubenswrapper[4675]: I1121 14:03:17.381709 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:03:17 crc kubenswrapper[4675]: E1121 14:03:17.385210 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="d34d8b0b-08da-4455-b70a-e4a7a4dff526" Nov 21 14:03:18 crc kubenswrapper[4675]: E1121 14:03:18.399607 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="d34d8b0b-08da-4455-b70a-e4a7a4dff526" Nov 21 14:03:21 crc kubenswrapper[4675]: I1121 14:03:21.081584 4675 scope.go:117] "RemoveContainer" containerID="f9191200bca3962b6b080fb2ae51422624b33d0f9420993d332fbf85f37af4b1" Nov 21 14:03:21 crc kubenswrapper[4675]: I1121 14:03:21.149201 4675 scope.go:117] "RemoveContainer" containerID="b8245c72015cfa270514a816a2f61e2c65001e3fd6f1546f6dd6a0b0924be580" Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.681370 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.776615 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-p5kck"] Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.777188 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="dnsmasq-dns" containerID="cri-o://b0a25bfd0b119bacf9da85cb10d40a75cf98db009ab8331e2af6dfe7e4a0fd38" gracePeriod=10 Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.950968 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hsxmx"] Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.953088 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.957962 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.249:5353: connect: connection refused" Nov 21 14:03:22 crc kubenswrapper[4675]: I1121 14:03:22.969232 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hsxmx"] Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.102282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.102371 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.102602 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-config\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.102658 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.102829 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpggx\" (UniqueName: \"kubernetes.io/projected/2d80929c-c14e-4ec5-943f-de21d45af551-kube-api-access-jpggx\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.102986 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.103012 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205418 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205712 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-config\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpggx\" (UniqueName: \"kubernetes.io/projected/2d80929c-c14e-4ec5-943f-de21d45af551-kube-api-access-jpggx\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205872 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205891 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.205989 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.206809 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.206832 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.206913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-config\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.207366 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.211227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.214740 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d80929c-c14e-4ec5-943f-de21d45af551-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.226766 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpggx\" (UniqueName: \"kubernetes.io/projected/2d80929c-c14e-4ec5-943f-de21d45af551-kube-api-access-jpggx\") pod \"dnsmasq-dns-5d75f767dc-hsxmx\" (UID: \"2d80929c-c14e-4ec5-943f-de21d45af551\") " pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.283888 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.475652 4675 generic.go:334] "Generic (PLEG): container finished" podID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerID="b0a25bfd0b119bacf9da85cb10d40a75cf98db009ab8331e2af6dfe7e4a0fd38" exitCode=0 Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.475697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" event={"ID":"e3d2866a-84e9-4475-bab0-69d4aaa9656f","Type":"ContainerDied","Data":"b0a25bfd0b119bacf9da85cb10d40a75cf98db009ab8331e2af6dfe7e4a0fd38"} Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.475837 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.476048 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" event={"ID":"e3d2866a-84e9-4475-bab0-69d4aaa9656f","Type":"ContainerDied","Data":"2e8bc88c31daf01827d3d209a853d1137b9874fde4ae8d9bdc76f2d38aaadc46"} Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.476119 4675 scope.go:117] "RemoveContainer" containerID="b0a25bfd0b119bacf9da85cb10d40a75cf98db009ab8331e2af6dfe7e4a0fd38" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.540995 4675 scope.go:117] "RemoveContainer" containerID="e7202bd477c8e47c7c513fb9200962aee7703b5d5fcc982f6ff91e1f18aedfd9" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.575021 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-nb\") pod \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.575146 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-svc\") pod \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.575893 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-config\") pod \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.575959 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-sb\") pod \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.576096 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-swift-storage-0\") pod \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.576134 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqj7z\" (UniqueName: \"kubernetes.io/projected/e3d2866a-84e9-4475-bab0-69d4aaa9656f-kube-api-access-dqj7z\") pod \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\" (UID: \"e3d2866a-84e9-4475-bab0-69d4aaa9656f\") " Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.580518 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d2866a-84e9-4475-bab0-69d4aaa9656f-kube-api-access-dqj7z" (OuterVolumeSpecName: "kube-api-access-dqj7z") pod "e3d2866a-84e9-4475-bab0-69d4aaa9656f" (UID: "e3d2866a-84e9-4475-bab0-69d4aaa9656f"). InnerVolumeSpecName "kube-api-access-dqj7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.655799 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3d2866a-84e9-4475-bab0-69d4aaa9656f" (UID: "e3d2866a-84e9-4475-bab0-69d4aaa9656f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.661086 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-config" (OuterVolumeSpecName: "config") pod "e3d2866a-84e9-4475-bab0-69d4aaa9656f" (UID: "e3d2866a-84e9-4475-bab0-69d4aaa9656f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.681511 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-config\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.681552 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.681568 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqj7z\" (UniqueName: \"kubernetes.io/projected/e3d2866a-84e9-4475-bab0-69d4aaa9656f-kube-api-access-dqj7z\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.684651 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e3d2866a-84e9-4475-bab0-69d4aaa9656f" (UID: "e3d2866a-84e9-4475-bab0-69d4aaa9656f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.685651 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3d2866a-84e9-4475-bab0-69d4aaa9656f" (UID: "e3d2866a-84e9-4475-bab0-69d4aaa9656f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.690442 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3d2866a-84e9-4475-bab0-69d4aaa9656f" (UID: "e3d2866a-84e9-4475-bab0-69d4aaa9656f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.783797 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.783869 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.783880 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3d2866a-84e9-4475-bab0-69d4aaa9656f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:23 crc kubenswrapper[4675]: I1121 14:03:23.818308 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hsxmx"] Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.499681 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-p5kck" Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.508454 4675 generic.go:334] "Generic (PLEG): container finished" podID="2d80929c-c14e-4ec5-943f-de21d45af551" containerID="cf96cf367f9e8e9f9086bf954d5476420a3d3dc984c5ea0b1fd9d4979cabb94f" exitCode=0 Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.508507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" event={"ID":"2d80929c-c14e-4ec5-943f-de21d45af551","Type":"ContainerDied","Data":"cf96cf367f9e8e9f9086bf954d5476420a3d3dc984c5ea0b1fd9d4979cabb94f"} Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.508544 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" event={"ID":"2d80929c-c14e-4ec5-943f-de21d45af551","Type":"ContainerStarted","Data":"c10b2d4988659b47abb032b13b7e178cb86b3f00825d913812411cfbd62fbccc"} Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.580103 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-p5kck"] Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.599825 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-p5kck"] Nov 21 14:03:24 crc kubenswrapper[4675]: I1121 14:03:24.864906 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" path="/var/lib/kubelet/pods/e3d2866a-84e9-4475-bab0-69d4aaa9656f/volumes" Nov 21 14:03:25 crc kubenswrapper[4675]: I1121 14:03:25.520776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" event={"ID":"2d80929c-c14e-4ec5-943f-de21d45af551","Type":"ContainerStarted","Data":"9bff68d16a0c6a733b7ce9bb1ced7a584ca7055d2b1a9a6f87b7e8d0c179fe55"} Nov 21 14:03:25 crc kubenswrapper[4675]: I1121 14:03:25.522639 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:25 crc kubenswrapper[4675]: I1121 14:03:25.553520 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" podStartSLOduration=3.553494822 podStartE2EDuration="3.553494822s" podCreationTimestamp="2025-11-21 14:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-21 14:03:25.54432887 +0000 UTC m=+1882.270743597" watchObservedRunningTime="2025-11-21 14:03:25.553494822 +0000 UTC m=+1882.279909549" Nov 21 14:03:28 crc kubenswrapper[4675]: I1121 14:03:28.559435 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7dswk" event={"ID":"bcf5e4dd-3414-4e74-a64e-94403684c91b","Type":"ContainerStarted","Data":"c6f2bd1c3b48e73def5381dee1f61982c11a69f6da12e9b017d86bc996baf745"} Nov 21 14:03:28 crc kubenswrapper[4675]: I1121 14:03:28.576922 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-7dswk" podStartSLOduration=2.416726392 podStartE2EDuration="49.576899393s" podCreationTimestamp="2025-11-21 14:02:39 +0000 UTC" firstStartedPulling="2025-11-21 14:02:40.475055308 +0000 UTC m=+1837.201470035" lastFinishedPulling="2025-11-21 14:03:27.635228309 +0000 UTC m=+1884.361643036" observedRunningTime="2025-11-21 14:03:28.573873256 +0000 UTC m=+1885.300287993" watchObservedRunningTime="2025-11-21 14:03:28.576899393 +0000 UTC m=+1885.303314130" Nov 21 14:03:28 crc kubenswrapper[4675]: I1121 14:03:28.849717 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:03:28 crc kubenswrapper[4675]: E1121 14:03:28.850057 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:03:30 crc kubenswrapper[4675]: I1121 14:03:30.583195 4675 generic.go:334] "Generic (PLEG): container finished" podID="bcf5e4dd-3414-4e74-a64e-94403684c91b" containerID="c6f2bd1c3b48e73def5381dee1f61982c11a69f6da12e9b017d86bc996baf745" exitCode=0 Nov 21 14:03:30 crc kubenswrapper[4675]: I1121 14:03:30.583303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7dswk" event={"ID":"bcf5e4dd-3414-4e74-a64e-94403684c91b","Type":"ContainerDied","Data":"c6f2bd1c3b48e73def5381dee1f61982c11a69f6da12e9b017d86bc996baf745"} Nov 21 14:03:30 crc kubenswrapper[4675]: I1121 14:03:30.865436 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 14:03:31 crc kubenswrapper[4675]: I1121 14:03:31.596815 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d34d8b0b-08da-4455-b70a-e4a7a4dff526","Type":"ContainerStarted","Data":"6e65ed3fdee5f8cc493ac3c9c88848f1b9d9b0893141b6d639b10f6bfdf715e8"} Nov 21 14:03:31 crc kubenswrapper[4675]: I1121 14:03:31.625144 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5435577069999997 podStartE2EDuration="47.62512834s" podCreationTimestamp="2025-11-21 14:02:44 +0000 UTC" firstStartedPulling="2025-11-21 14:02:46.000918772 +0000 UTC m=+1842.727333519" lastFinishedPulling="2025-11-21 14:03:31.082489425 +0000 UTC m=+1887.808904152" observedRunningTime="2025-11-21 14:03:31.623318264 +0000 UTC m=+1888.349733001" watchObservedRunningTime="2025-11-21 14:03:31.62512834 +0000 UTC m=+1888.351543067" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.175624 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7dswk" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.356656 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-config-data\") pod \"bcf5e4dd-3414-4e74-a64e-94403684c91b\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.356777 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4g54\" (UniqueName: \"kubernetes.io/projected/bcf5e4dd-3414-4e74-a64e-94403684c91b-kube-api-access-p4g54\") pod \"bcf5e4dd-3414-4e74-a64e-94403684c91b\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.356863 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-combined-ca-bundle\") pod \"bcf5e4dd-3414-4e74-a64e-94403684c91b\" (UID: \"bcf5e4dd-3414-4e74-a64e-94403684c91b\") " Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.363983 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf5e4dd-3414-4e74-a64e-94403684c91b-kube-api-access-p4g54" (OuterVolumeSpecName: "kube-api-access-p4g54") pod "bcf5e4dd-3414-4e74-a64e-94403684c91b" (UID: "bcf5e4dd-3414-4e74-a64e-94403684c91b"). InnerVolumeSpecName "kube-api-access-p4g54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.396413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcf5e4dd-3414-4e74-a64e-94403684c91b" (UID: "bcf5e4dd-3414-4e74-a64e-94403684c91b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.460763 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4g54\" (UniqueName: \"kubernetes.io/projected/bcf5e4dd-3414-4e74-a64e-94403684c91b-kube-api-access-p4g54\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.460803 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.462751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-config-data" (OuterVolumeSpecName: "config-data") pod "bcf5e4dd-3414-4e74-a64e-94403684c91b" (UID: "bcf5e4dd-3414-4e74-a64e-94403684c91b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.563122 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcf5e4dd-3414-4e74-a64e-94403684c91b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.610410 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7dswk" event={"ID":"bcf5e4dd-3414-4e74-a64e-94403684c91b","Type":"ContainerDied","Data":"a8f6a4ab0374b998e7c3609440cd4138cb55b0a3def301ac82a34af625944af8"} Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.610442 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7dswk" Nov 21 14:03:32 crc kubenswrapper[4675]: I1121 14:03:32.610464 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f6a4ab0374b998e7c3609440cd4138cb55b0a3def301ac82a34af625944af8" Nov 21 14:03:33 crc kubenswrapper[4675]: I1121 14:03:33.285242 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-hsxmx" Nov 21 14:03:33 crc kubenswrapper[4675]: I1121 14:03:33.345717 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-k5lxn"] Nov 21 14:03:33 crc kubenswrapper[4675]: I1121 14:03:33.346048 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerName="dnsmasq-dns" containerID="cri-o://84cd214249873fb6c6fac6717456f08ec84d539762d76bf68b48fe01a4bfc0d4" gracePeriod=10 Nov 21 14:03:33 crc kubenswrapper[4675]: I1121 14:03:33.643088 4675 generic.go:334] "Generic (PLEG): container finished" podID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerID="84cd214249873fb6c6fac6717456f08ec84d539762d76bf68b48fe01a4bfc0d4" exitCode=0 Nov 21 14:03:33 crc kubenswrapper[4675]: I1121 14:03:33.643102 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" event={"ID":"3eb6992d-1fd9-4a75-80e9-ddc54114b948","Type":"ContainerDied","Data":"84cd214249873fb6c6fac6717456f08ec84d539762d76bf68b48fe01a4bfc0d4"} Nov 21 14:03:33 crc kubenswrapper[4675]: I1121 14:03:33.927805 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.112784 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-config\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.112930 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-swift-storage-0\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.112983 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-svc\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.113226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-openstack-edpm-ipam\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.113304 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh5xx\" (UniqueName: \"kubernetes.io/projected/3eb6992d-1fd9-4a75-80e9-ddc54114b948-kube-api-access-jh5xx\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.113408 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-sb\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.113549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-nb\") pod \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\" (UID: \"3eb6992d-1fd9-4a75-80e9-ddc54114b948\") " Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.119077 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7f8454c7d4-x6h7x"] Nov 21 14:03:34 crc kubenswrapper[4675]: E1121 14:03:34.120407 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf5e4dd-3414-4e74-a64e-94403684c91b" containerName="heat-db-sync" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.120428 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf5e4dd-3414-4e74-a64e-94403684c91b" containerName="heat-db-sync" Nov 21 14:03:34 crc kubenswrapper[4675]: E1121 14:03:34.120450 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerName="dnsmasq-dns" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.120457 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerName="dnsmasq-dns" Nov 21 14:03:34 crc kubenswrapper[4675]: E1121 
14:03:34.120474 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="dnsmasq-dns" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.120481 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="dnsmasq-dns" Nov 21 14:03:34 crc kubenswrapper[4675]: E1121 14:03:34.120506 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerName="init" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.120512 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerName="init" Nov 21 14:03:34 crc kubenswrapper[4675]: E1121 14:03:34.120538 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="init" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.120546 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="init" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.120985 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" containerName="dnsmasq-dns" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.121020 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d2866a-84e9-4475-bab0-69d4aaa9656f" containerName="dnsmasq-dns" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.121035 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf5e4dd-3414-4e74-a64e-94403684c91b" containerName="heat-db-sync" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.122389 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.171624 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-77f4868784-6nk2h"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.174754 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.210115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb6992d-1fd9-4a75-80e9-ddc54114b948-kube-api-access-jh5xx" (OuterVolumeSpecName: "kube-api-access-jh5xx") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "kube-api-access-jh5xx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.213926 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f8454c7d4-x6h7x"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.219935 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznrq\" (UniqueName: \"kubernetes.io/projected/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-kube-api-access-zznrq\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.220111 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-config-data\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.220174 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-config-data-custom\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.220196 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-combined-ca-bundle\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.220256 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh5xx\" (UniqueName: \"kubernetes.io/projected/3eb6992d-1fd9-4a75-80e9-ddc54114b948-kube-api-access-jh5xx\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.228995 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5f97697d96-mp958"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.231958 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.249767 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.254396 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.257342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77f4868784-6nk2h"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.271599 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-config" (OuterVolumeSpecName: "config") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.288589 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f97697d96-mp958"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.289080 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.303610 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.311712 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3eb6992d-1fd9-4a75-80e9-ddc54114b948" (UID: "3eb6992d-1fd9-4a75-80e9-ddc54114b948"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.321849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-config-data\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.321902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-combined-ca-bundle\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.321927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-config-data-custom\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.321977 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-internal-tls-certs\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.321998 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6nk\" (UniqueName: \"kubernetes.io/projected/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-kube-api-access-fh6nk\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322012 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-config-data\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322034 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7979v\" (UniqueName: \"kubernetes.io/projected/148b5a1d-39fe-4a33-88ee-97b3383595ff-kube-api-access-7979v\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322057 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-public-tls-certs\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322114 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-config-data\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322131 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-config-data-custom\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322149 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-public-tls-certs\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322204 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-combined-ca-bundle\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-config-data-custom\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322244 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-internal-tls-certs\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322260 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-combined-ca-bundle\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznrq\" (UniqueName: \"kubernetes.io/projected/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-kube-api-access-zznrq\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322369 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322383 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-dns-svc\") on node \"crc\" DevicePath 
\"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322393 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322401 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322410 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.322419 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb6992d-1fd9-4a75-80e9-ddc54114b948-config\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.329783 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-config-data\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.334908 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-config-data-custom\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.336254 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-combined-ca-bundle\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.342099 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zznrq\" (UniqueName: \"kubernetes.io/projected/a48b13f7-e6d5-448e-b83e-be3b66c31fb0-kube-api-access-zznrq\") pod \"heat-engine-7f8454c7d4-x6h7x\" (UID: \"a48b13f7-e6d5-448e-b83e-be3b66c31fb0\") " pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.424830 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-config-data\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.424907 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-combined-ca-bundle\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.424967 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-config-data-custom\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425654 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-internal-tls-certs\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425699 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh6nk\" (UniqueName: \"kubernetes.io/projected/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-kube-api-access-fh6nk\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425722 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-config-data\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425753 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7979v\" (UniqueName: \"kubernetes.io/projected/148b5a1d-39fe-4a33-88ee-97b3383595ff-kube-api-access-7979v\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-public-tls-certs\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-config-data-custom\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.425894 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-public-tls-certs\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.426001 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-combined-ca-bundle\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.426050 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-internal-tls-certs\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.431012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-internal-tls-certs\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.434910 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-config-data\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.435323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-public-tls-certs\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.437036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-public-tls-certs\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.437694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-internal-tls-certs\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.438657 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-combined-ca-bundle\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.438975 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-config-data\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.440411 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/148b5a1d-39fe-4a33-88ee-97b3383595ff-config-data-custom\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.440465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-config-data-custom\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: 
\"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.441975 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-combined-ca-bundle\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.443837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7979v\" (UniqueName: \"kubernetes.io/projected/148b5a1d-39fe-4a33-88ee-97b3383595ff-kube-api-access-7979v\") pod \"heat-api-77f4868784-6nk2h\" (UID: \"148b5a1d-39fe-4a33-88ee-97b3383595ff\") " pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.446416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh6nk\" (UniqueName: \"kubernetes.io/projected/5bdb7df7-dd8e-4fea-9634-65fa6f741de8-kube-api-access-fh6nk\") pod \"heat-cfnapi-5f97697d96-mp958\" (UID: \"5bdb7df7-dd8e-4fea-9634-65fa6f741de8\") " pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.639407 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.667383 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.669170 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" event={"ID":"3eb6992d-1fd9-4a75-80e9-ddc54114b948","Type":"ContainerDied","Data":"b9a4cd807b1b0baa05a60cc1c6854a86f34ce3636c381ed1d08c26c65cd9ce50"} Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.669216 4675 scope.go:117] "RemoveContainer" containerID="84cd214249873fb6c6fac6717456f08ec84d539762d76bf68b48fe01a4bfc0d4" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.669294 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-k5lxn" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.674681 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.708450 4675 scope.go:117] "RemoveContainer" containerID="d9f46fe9721c85dd442e409c7d8fba69ad05398e286fc6391e48af301a767a40" Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.718855 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-k5lxn"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.730242 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-k5lxn"] Nov 21 14:03:34 crc kubenswrapper[4675]: I1121 14:03:34.885524 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb6992d-1fd9-4a75-80e9-ddc54114b948" path="/var/lib/kubelet/pods/3eb6992d-1fd9-4a75-80e9-ddc54114b948/volumes" Nov 21 14:03:35 crc kubenswrapper[4675]: I1121 14:03:35.488286 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f8454c7d4-x6h7x"] Nov 21 14:03:35 crc kubenswrapper[4675]: I1121 14:03:35.601542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77f4868784-6nk2h"] Nov 21 14:03:35 crc kubenswrapper[4675]: I1121 14:03:35.700456 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f97697d96-mp958"] Nov 21 14:03:35 crc kubenswrapper[4675]: I1121 14:03:35.704258 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77f4868784-6nk2h" event={"ID":"148b5a1d-39fe-4a33-88ee-97b3383595ff","Type":"ContainerStarted","Data":"b7bdc424583a92755cdb3d9ad18c7dadedc80cd32dfc2da44243f68f6d5dc023"} Nov 21 14:03:35 crc kubenswrapper[4675]: I1121 14:03:35.707175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f8454c7d4-x6h7x" event={"ID":"a48b13f7-e6d5-448e-b83e-be3b66c31fb0","Type":"ContainerStarted","Data":"854dca71c594ca51f9d4d73d1415afd8a8353bb1ced5608c2ef8557d044551e0"} Nov 21 14:03:36 crc kubenswrapper[4675]: I1121 14:03:36.723777 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f8454c7d4-x6h7x" event={"ID":"a48b13f7-e6d5-448e-b83e-be3b66c31fb0","Type":"ContainerStarted","Data":"f90c3ce2283ec0ad681d9cbe0fcfef2946859a759aab3a39f5547810b6431a50"} Nov 21 14:03:36 crc kubenswrapper[4675]: I1121 14:03:36.723883 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:36 crc kubenswrapper[4675]: I1121 14:03:36.733059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f97697d96-mp958" event={"ID":"5bdb7df7-dd8e-4fea-9634-65fa6f741de8","Type":"ContainerStarted","Data":"c23e1775b86493dee49d18d6a33e8ff6857b7f31dacab7bb1c5a4aa575c08837"} Nov 21 14:03:36 crc kubenswrapper[4675]: I1121 14:03:36.739536 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7f8454c7d4-x6h7x" podStartSLOduration=3.739513992 podStartE2EDuration="3.739513992s" podCreationTimestamp="2025-11-21 14:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:03:36.738054905 +0000 UTC m=+1893.464469642" watchObservedRunningTime="2025-11-21 14:03:36.739513992 +0000 UTC m=+1893.465928729" Nov 21 14:03:41 crc kubenswrapper[4675]: I1121 14:03:41.849027 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:03:41 crc kubenswrapper[4675]: E1121 
14:03:41.849994 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:03:42 crc kubenswrapper[4675]: I1121 14:03:42.806220 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f97697d96-mp958" event={"ID":"5bdb7df7-dd8e-4fea-9634-65fa6f741de8","Type":"ContainerStarted","Data":"04f94713124c85e66d02a60bece4b5a09f6485e822a05ccdbcf32fb074b388ba"} Nov 21 14:03:42 crc kubenswrapper[4675]: I1121 14:03:42.806611 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:42 crc kubenswrapper[4675]: I1121 14:03:42.808648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77f4868784-6nk2h" event={"ID":"148b5a1d-39fe-4a33-88ee-97b3383595ff","Type":"ContainerStarted","Data":"4cb0ac7c1f9a924ad67f7c277888f996629e15f0d357acbd20c28fe35af5a4d1"} Nov 21 14:03:42 crc kubenswrapper[4675]: I1121 14:03:42.808801 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:42 crc kubenswrapper[4675]: I1121 14:03:42.839795 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5f97697d96-mp958" podStartSLOduration=2.473236947 podStartE2EDuration="8.839766432s" podCreationTimestamp="2025-11-21 14:03:34 +0000 UTC" firstStartedPulling="2025-11-21 14:03:35.719454859 +0000 UTC m=+1892.445869586" lastFinishedPulling="2025-11-21 14:03:42.085984354 +0000 UTC m=+1898.812399071" observedRunningTime="2025-11-21 14:03:42.828941468 +0000 UTC m=+1899.555356205" watchObservedRunningTime="2025-11-21 14:03:42.839766432 +0000 UTC m=+1899.566181159" Nov 21 14:03:42 crc kubenswrapper[4675]: I1121 14:03:42.876279 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-77f4868784-6nk2h" podStartSLOduration=2.396998081 podStartE2EDuration="8.876252953s" podCreationTimestamp="2025-11-21 14:03:34 +0000 UTC" firstStartedPulling="2025-11-21 14:03:35.609598534 +0000 UTC m=+1892.336013261" lastFinishedPulling="2025-11-21 14:03:42.088853366 +0000 UTC m=+1898.815268133" observedRunningTime="2025-11-21 14:03:42.848367969 +0000 UTC m=+1899.574782696" watchObservedRunningTime="2025-11-21 14:03:42.876252953 +0000 UTC m=+1899.602667680" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.419084 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp"] Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.421227 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.428304 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.428457 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.428457 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.428464 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.432539 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp"] Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.573631 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.573734 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.573815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.573872 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncl8g\" (UniqueName: \"kubernetes.io/projected/de094806-84d9-4903-be6c-c00e33b1e782-kube-api-access-ncl8g\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.676030 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.676129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.676188 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.676230 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncl8g\" (UniqueName: \"kubernetes.io/projected/de094806-84d9-4903-be6c-c00e33b1e782-kube-api-access-ncl8g\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.681740 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.684177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.687793 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.693708 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncl8g\" (UniqueName: \"kubernetes.io/projected/de094806-84d9-4903-be6c-c00e33b1e782-kube-api-access-ncl8g\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:43 crc kubenswrapper[4675]: I1121 14:03:43.745586 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:03:44 crc kubenswrapper[4675]: I1121 14:03:44.836724 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp"] Nov 21 14:03:44 crc kubenswrapper[4675]: I1121 14:03:44.862015 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" event={"ID":"de094806-84d9-4903-be6c-c00e33b1e782","Type":"ContainerStarted","Data":"4080975bc6e9b88323b47b319faf3716e4e57e5d4926bad600e63e0369b75c89"} Nov 21 14:03:46 crc kubenswrapper[4675]: I1121 14:03:46.876048 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b2ab3dd-83aa-4d37-8f44-bb3d277932fb" containerID="72d7601e46916f6ec8f7c6edfbf347d682069270461505cfd00a735725fa4f01" exitCode=0 Nov 21 14:03:46 crc kubenswrapper[4675]: I1121 14:03:46.876101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb","Type":"ContainerDied","Data":"72d7601e46916f6ec8f7c6edfbf347d682069270461505cfd00a735725fa4f01"} Nov 21 14:03:46 crc kubenswrapper[4675]: I1121 14:03:46.881964 4675 generic.go:334] "Generic (PLEG): container finished" podID="a5ef674f-8b42-40b1-ba1a-fa2d68858b31" containerID="9397a766b4d2dc90884b5dfa505979cc29440e7b906cd9fe0468a077b8a8122c" exitCode=0 Nov 21 14:03:46 crc kubenswrapper[4675]: I1121 14:03:46.882037 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ef674f-8b42-40b1-ba1a-fa2d68858b31","Type":"ContainerDied","Data":"9397a766b4d2dc90884b5dfa505979cc29440e7b906cd9fe0468a077b8a8122c"} Nov 21 14:03:48 crc kubenswrapper[4675]: I1121 14:03:48.931223 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b2ab3dd-83aa-4d37-8f44-bb3d277932fb","Type":"ContainerStarted","Data":"6f3dcd938231f68ca4afeed89d862d009a9cf8cd0ddab621cff30bb7b54d2106"} Nov 21 14:03:48 crc kubenswrapper[4675]: I1121 14:03:48.932103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:03:48 crc kubenswrapper[4675]: I1121 14:03:48.935297 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ef674f-8b42-40b1-ba1a-fa2d68858b31","Type":"ContainerStarted","Data":"f9be37c664e5acb594843ad0f20f03d14c8d54b99f8577ca83904b23819ded22"} Nov 21 14:03:48 crc kubenswrapper[4675]: I1121 14:03:48.935584 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 14:03:48 crc kubenswrapper[4675]: I1121 14:03:48.976061 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.976043653 podStartE2EDuration="37.976043653s" podCreationTimestamp="2025-11-21 14:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:03:48.959242348 +0000 UTC m=+1905.685657075" watchObservedRunningTime="2025-11-21 14:03:48.976043653 +0000 UTC m=+1905.702458380" Nov 21 14:03:49 crc kubenswrapper[4675]: I1121 14:03:49.002518 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.002496701 podStartE2EDuration="38.002496701s" podCreationTimestamp="2025-11-21 14:03:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:03:48.988419515 +0000 UTC m=+1905.714834262" watchObservedRunningTime="2025-11-21 14:03:49.002496701 +0000 UTC m=+1905.728911438" Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.545819 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-77f4868784-6nk2h" Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.568403 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5f97697d96-mp958" Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.667434 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84586cb599-nkzm2"] Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.667681 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-84586cb599-nkzm2" podUID="3d968ae6-da72-485c-ac6d-393bbc1363da" containerName="heat-api" containerID="cri-o://79dc15a658e99d0bfb91d43d4d00647d8bd2c017779b4d4cdef3bfd09d83dcd0" gracePeriod=60 Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.693419 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-f94f9658f-ptznm"] Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.694897 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-f94f9658f-ptznm" podUID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" containerName="heat-cfnapi" containerID="cri-o://1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a" gracePeriod=60 Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.703392 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7f8454c7d4-x6h7x" Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.772200 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7498687b57-vr4xt"] Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.775168 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7498687b57-vr4xt" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerName="heat-engine" containerID="cri-o://8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" gracePeriod=60 Nov 21 14:03:54 crc kubenswrapper[4675]: I1121 14:03:54.881340 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:03:54 crc kubenswrapper[4675]: E1121 14:03:54.890350 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:03:57 crc kubenswrapper[4675]: I1121 14:03:57.950588 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-84586cb599-nkzm2" podUID="3d968ae6-da72-485c-ac6d-393bbc1363da" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.212:8004/healthcheck\": read tcp 10.217.0.2:53958->10.217.0.212:8004: read: connection reset by peer" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.089280 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" event={"ID":"de094806-84d9-4903-be6c-c00e33b1e782","Type":"ContainerStarted","Data":"23a066863271e3170a095175264ebc092a72a98379fa4aa343b331adaa77f1e2"} Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.091609 4675 generic.go:334] "Generic (PLEG): container finished" podID="3d968ae6-da72-485c-ac6d-393bbc1363da" containerID="79dc15a658e99d0bfb91d43d4d00647d8bd2c017779b4d4cdef3bfd09d83dcd0" exitCode=0 Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.091652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84586cb599-nkzm2" event={"ID":"3d968ae6-da72-485c-ac6d-393bbc1363da","Type":"ContainerDied","Data":"79dc15a658e99d0bfb91d43d4d00647d8bd2c017779b4d4cdef3bfd09d83dcd0"} Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.114454 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" podStartSLOduration=2.399694796 podStartE2EDuration="15.114438237s" podCreationTimestamp="2025-11-21 14:03:43 +0000 UTC" firstStartedPulling="2025-11-21 14:03:44.843449747 +0000 UTC m=+1901.569864464" lastFinishedPulling="2025-11-21 14:03:57.558193178 +0000 UTC m=+1914.284607905" observedRunningTime="2025-11-21 14:03:58.114344625 +0000 UTC m=+1914.840759352" watchObservedRunningTime="2025-11-21 14:03:58.114438237 +0000 UTC m=+1914.840852964" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.463690 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.518136 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-public-tls-certs\") pod \"3d968ae6-da72-485c-ac6d-393bbc1363da\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.518242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-internal-tls-certs\") pod \"3d968ae6-da72-485c-ac6d-393bbc1363da\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.518344 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl4c4\" (UniqueName: \"kubernetes.io/projected/3d968ae6-da72-485c-ac6d-393bbc1363da-kube-api-access-pl4c4\") pod \"3d968ae6-da72-485c-ac6d-393bbc1363da\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.518471 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data\") pod \"3d968ae6-da72-485c-ac6d-393bbc1363da\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.518521 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-combined-ca-bundle\") pod \"3d968ae6-da72-485c-ac6d-393bbc1363da\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.518547 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data-custom\") pod \"3d968ae6-da72-485c-ac6d-393bbc1363da\" (UID: \"3d968ae6-da72-485c-ac6d-393bbc1363da\") " Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.526222 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d968ae6-da72-485c-ac6d-393bbc1363da" (UID: "3d968ae6-da72-485c-ac6d-393bbc1363da"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.526256 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d968ae6-da72-485c-ac6d-393bbc1363da-kube-api-access-pl4c4" (OuterVolumeSpecName: "kube-api-access-pl4c4") pod "3d968ae6-da72-485c-ac6d-393bbc1363da" (UID: "3d968ae6-da72-485c-ac6d-393bbc1363da"). InnerVolumeSpecName "kube-api-access-pl4c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.557040 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d968ae6-da72-485c-ac6d-393bbc1363da" (UID: "3d968ae6-da72-485c-ac6d-393bbc1363da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.587928 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d968ae6-da72-485c-ac6d-393bbc1363da" (UID: "3d968ae6-da72-485c-ac6d-393bbc1363da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.589870 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data" (OuterVolumeSpecName: "config-data") pod "3d968ae6-da72-485c-ac6d-393bbc1363da" (UID: "3d968ae6-da72-485c-ac6d-393bbc1363da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.601658 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d968ae6-da72-485c-ac6d-393bbc1363da" (UID: "3d968ae6-da72-485c-ac6d-393bbc1363da"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.621550 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl4c4\" (UniqueName: \"kubernetes.io/projected/3d968ae6-da72-485c-ac6d-393bbc1363da-kube-api-access-pl4c4\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.621583 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.621593 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.621605 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.621613 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:58 crc kubenswrapper[4675]: I1121 14:03:58.621621 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d968ae6-da72-485c-ac6d-393bbc1363da-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.103294 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84586cb599-nkzm2" event={"ID":"3d968ae6-da72-485c-ac6d-393bbc1363da","Type":"ContainerDied","Data":"6c9d1bf74f49e55277306b0ff7db779b9ac6709622f663f7416440430629322a"} Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.103364 4675 scope.go:117] "RemoveContainer" containerID="79dc15a658e99d0bfb91d43d4d00647d8bd2c017779b4d4cdef3bfd09d83dcd0" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.104489 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84586cb599-nkzm2" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.138910 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-f94f9658f-ptznm" podUID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.213:8000/healthcheck\": read tcp 10.217.0.2:35770->10.217.0.213:8000: read: connection reset by peer" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.139221 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84586cb599-nkzm2"] Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.149233 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-84586cb599-nkzm2"] Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.645667 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.755666 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-jr7sk"] Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.770060 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-jr7sk"] Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.777396 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-combined-ca-bundle\") pod \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.777566 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data\") pod \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.777631 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swmks\" (UniqueName: \"kubernetes.io/projected/f1015b8a-a8a3-4941-8959-2d4fd5aee749-kube-api-access-swmks\") pod \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.777651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data-custom\") pod \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.777799 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-public-tls-certs\") pod \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.777827 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-internal-tls-certs\") pod \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\" (UID: \"f1015b8a-a8a3-4941-8959-2d4fd5aee749\") " Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.783172 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1015b8a-a8a3-4941-8959-2d4fd5aee749-kube-api-access-swmks" (OuterVolumeSpecName: "kube-api-access-swmks") pod "f1015b8a-a8a3-4941-8959-2d4fd5aee749" (UID: "f1015b8a-a8a3-4941-8959-2d4fd5aee749"). InnerVolumeSpecName "kube-api-access-swmks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.787140 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1015b8a-a8a3-4941-8959-2d4fd5aee749" (UID: "f1015b8a-a8a3-4941-8959-2d4fd5aee749"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.861124 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-j42fk"] Nov 21 14:03:59 crc kubenswrapper[4675]: E1121 14:03:59.861761 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" containerName="heat-cfnapi" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.861783 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" containerName="heat-cfnapi" Nov 21 14:03:59 crc kubenswrapper[4675]: E1121 14:03:59.861812 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d968ae6-da72-485c-ac6d-393bbc1363da" containerName="heat-api" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.861820 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d968ae6-da72-485c-ac6d-393bbc1363da" containerName="heat-api" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.862152 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d968ae6-da72-485c-ac6d-393bbc1363da" containerName="heat-api" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.862189 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" containerName="heat-cfnapi" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.863363 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-j42fk" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.865982 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.887898 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swmks\" (UniqueName: \"kubernetes.io/projected/f1015b8a-a8a3-4941-8959-2d4fd5aee749-kube-api-access-swmks\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.887980 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.903143 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-j42fk"] Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.942295 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1015b8a-a8a3-4941-8959-2d4fd5aee749" (UID: "f1015b8a-a8a3-4941-8959-2d4fd5aee749"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.942539 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f1015b8a-a8a3-4941-8959-2d4fd5aee749" (UID: "f1015b8a-a8a3-4941-8959-2d4fd5aee749"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.944243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data" (OuterVolumeSpecName: "config-data") pod "f1015b8a-a8a3-4941-8959-2d4fd5aee749" (UID: "f1015b8a-a8a3-4941-8959-2d4fd5aee749"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.973261 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f1015b8a-a8a3-4941-8959-2d4fd5aee749" (UID: "f1015b8a-a8a3-4941-8959-2d4fd5aee749"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990016 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4dft\" (UniqueName: \"kubernetes.io/projected/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-kube-api-access-v4dft\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-scripts\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990187 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-combined-ca-bundle\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990444 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-config-data\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990602 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990624 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990638 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:59 crc kubenswrapper[4675]: I1121 14:03:59.990648 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1015b8a-a8a3-4941-8959-2d4fd5aee749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 
14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.092495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-combined-ca-bundle\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.093355 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-config-data\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.093675 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4dft\" (UniqueName: \"kubernetes.io/projected/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-kube-api-access-v4dft\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.093844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-scripts\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.097206 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-combined-ca-bundle\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.099136 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-config-data\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.099693 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-scripts\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.110914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4dft\" (UniqueName: \"kubernetes.io/projected/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-kube-api-access-v4dft\") pod \"aodh-db-sync-j42fk\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.121994 4675 generic.go:334] "Generic (PLEG): container finished" podID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" containerID="1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a" exitCode=0 Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.122256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f94f9658f-ptznm" event={"ID":"f1015b8a-a8a3-4941-8959-2d4fd5aee749","Type":"ContainerDied","Data":"1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a"} Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.122374 4675 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f94f9658f-ptznm" event={"ID":"f1015b8a-a8a3-4941-8959-2d4fd5aee749","Type":"ContainerDied","Data":"55434c194034a900ab781043a71564447fc178113a0dbbf7de6bc77993ab1fbe"} Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.122449 4675 scope.go:117] "RemoveContainer" containerID="1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.122453 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-f94f9658f-ptznm" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.164364 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-f94f9658f-ptznm"] Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.164509 4675 scope.go:117] "RemoveContainer" containerID="1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a" Nov 21 14:04:00 crc kubenswrapper[4675]: E1121 14:04:00.164926 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a\": container with ID starting with 1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a not found: ID does not exist" containerID="1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.164958 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a"} err="failed to get container status \"1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a\": rpc error: code = NotFound desc = could not find container \"1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a\": container with ID starting with 1103c8bebf280470e3fecf67553e68cb907519aa161d89fb985298e4547f045a not found: ID does not exist" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.175032 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-f94f9658f-ptznm"] Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.338619 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.822364 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-j42fk"] Nov 21 14:04:00 crc kubenswrapper[4675]: W1121 14:04:00.830783 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a3d3c29_d332_4c74_af5e_60e1f2eea6f4.slice/crio-30b4f4e3b51dd9a920b7ab59341d0983e43710647d82884dda0252964e4d45ce WatchSource:0}: Error finding container 30b4f4e3b51dd9a920b7ab59341d0983e43710647d82884dda0252964e4d45ce: Status 404 returned error can't find the container with id 30b4f4e3b51dd9a920b7ab59341d0983e43710647d82884dda0252964e4d45ce Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.869486 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d968ae6-da72-485c-ac6d-393bbc1363da" path="/var/lib/kubelet/pods/3d968ae6-da72-485c-ac6d-393bbc1363da/volumes" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.870526 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b591ff7-fb92-466b-870c-e2138e739b42" path="/var/lib/kubelet/pods/7b591ff7-fb92-466b-870c-e2138e739b42/volumes" Nov 21 14:04:00 crc kubenswrapper[4675]: I1121 14:04:00.871679 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1015b8a-a8a3-4941-8959-2d4fd5aee749" path="/var/lib/kubelet/pods/f1015b8a-a8a3-4941-8959-2d4fd5aee749/volumes" Nov 21 14:04:01 crc kubenswrapper[4675]: I1121 14:04:01.139175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j42fk" event={"ID":"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4","Type":"ContainerStarted","Data":"30b4f4e3b51dd9a920b7ab59341d0983e43710647d82884dda0252964e4d45ce"} Nov 21 14:04:01 crc kubenswrapper[4675]: I1121 14:04:01.679155 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6b2ab3dd-83aa-4d37-8f44-bb3d277932fb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.8:5671: connect: connection refused" Nov 21 14:04:01 crc kubenswrapper[4675]: I1121 14:04:01.707553 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a5ef674f-8b42-40b1-ba1a-fa2d68858b31" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.9:5671: connect: connection refused" Nov 21 14:04:03 crc kubenswrapper[4675]: E1121 14:04:03.288636 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:04:03 crc kubenswrapper[4675]: E1121 14:04:03.318588 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:04:03 crc kubenswrapper[4675]: E1121 14:04:03.323174 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:04:03 crc kubenswrapper[4675]: E1121 14:04:03.323248 4675 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7498687b57-vr4xt" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerName="heat-engine" Nov 21 14:04:07 crc kubenswrapper[4675]: I1121 14:04:07.111721 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 14:04:07 crc kubenswrapper[4675]: I1121 14:04:07.849040 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:04:07 crc kubenswrapper[4675]: E1121 14:04:07.849939 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:04:08 crc kubenswrapper[4675]: I1121 14:04:08.269019 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j42fk" event={"ID":"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4","Type":"ContainerStarted","Data":"6c92cfbbd49a132d32cbbd6b165f4a8201904ad80a7b5f9f186760d12f4f1ff7"} Nov 21 14:04:08 crc kubenswrapper[4675]: I1121 14:04:08.300994 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-j42fk" podStartSLOduration=3.025461016 podStartE2EDuration="9.300975893s" podCreationTimestamp="2025-11-21 14:03:59 +0000 UTC" firstStartedPulling="2025-11-21 14:04:00.833929402 +0000 UTC m=+1917.560344129" lastFinishedPulling="2025-11-21 14:04:07.109444279 +0000 UTC m=+1923.835859006" observedRunningTime="2025-11-21 14:04:08.289835572 +0000 UTC m=+1925.016250309" watchObservedRunningTime="2025-11-21 14:04:08.300975893 +0000 UTC m=+1925.027390620" Nov 21 14:04:10 crc kubenswrapper[4675]: I1121 14:04:10.313306 4675 generic.go:334] "Generic (PLEG): container finished" podID="8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" containerID="6c92cfbbd49a132d32cbbd6b165f4a8201904ad80a7b5f9f186760d12f4f1ff7" exitCode=0 Nov 21 14:04:10 crc kubenswrapper[4675]: I1121 14:04:10.313742 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j42fk" event={"ID":"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4","Type":"ContainerDied","Data":"6c92cfbbd49a132d32cbbd6b165f4a8201904ad80a7b5f9f186760d12f4f1ff7"} Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.327683 4675 generic.go:334] "Generic (PLEG): container finished" podID="de094806-84d9-4903-be6c-c00e33b1e782" containerID="23a066863271e3170a095175264ebc092a72a98379fa4aa343b331adaa77f1e2" exitCode=0 Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.327806 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" event={"ID":"de094806-84d9-4903-be6c-c00e33b1e782","Type":"ContainerDied","Data":"23a066863271e3170a095175264ebc092a72a98379fa4aa343b331adaa77f1e2"} Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.678264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 
14:04:11.696376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.786250 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.919208 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-combined-ca-bundle\") pod \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.919539 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4dft\" (UniqueName: \"kubernetes.io/projected/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-kube-api-access-v4dft\") pod \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.919635 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-config-data\") pod \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.919668 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-scripts\") pod \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\" (UID: \"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4\") " Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.926134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-kube-api-access-v4dft" (OuterVolumeSpecName: "kube-api-access-v4dft") pod "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" (UID: "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4"). InnerVolumeSpecName "kube-api-access-v4dft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.937396 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-scripts" (OuterVolumeSpecName: "scripts") pod "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" (UID: "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.968076 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-config-data" (OuterVolumeSpecName: "config-data") pod "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" (UID: "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:11 crc kubenswrapper[4675]: I1121 14:04:11.972733 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" (UID: "8a3d3c29-d332-4c74-af5e-60e1f2eea6f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.025968 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.026006 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.026015 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.026026 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4dft\" (UniqueName: \"kubernetes.io/projected/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4-kube-api-access-v4dft\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.342370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j42fk" event={"ID":"8a3d3c29-d332-4c74-af5e-60e1f2eea6f4","Type":"ContainerDied","Data":"30b4f4e3b51dd9a920b7ab59341d0983e43710647d82884dda0252964e4d45ce"} Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.342437 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b4f4e3b51dd9a920b7ab59341d0983e43710647d82884dda0252964e4d45ce" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.342384 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-j42fk" Nov 21 14:04:12 crc kubenswrapper[4675]: I1121 14:04:12.957265 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.077390 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-repo-setup-combined-ca-bundle\") pod \"de094806-84d9-4903-be6c-c00e33b1e782\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.077781 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-inventory\") pod \"de094806-84d9-4903-be6c-c00e33b1e782\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.077995 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-ssh-key\") pod \"de094806-84d9-4903-be6c-c00e33b1e782\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.078023 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncl8g\" (UniqueName: \"kubernetes.io/projected/de094806-84d9-4903-be6c-c00e33b1e782-kube-api-access-ncl8g\") pod \"de094806-84d9-4903-be6c-c00e33b1e782\" (UID: \"de094806-84d9-4903-be6c-c00e33b1e782\") " Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.087516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "de094806-84d9-4903-be6c-c00e33b1e782" (UID: "de094806-84d9-4903-be6c-c00e33b1e782"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.103544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de094806-84d9-4903-be6c-c00e33b1e782-kube-api-access-ncl8g" (OuterVolumeSpecName: "kube-api-access-ncl8g") pod "de094806-84d9-4903-be6c-c00e33b1e782" (UID: "de094806-84d9-4903-be6c-c00e33b1e782"). InnerVolumeSpecName "kube-api-access-ncl8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.122385 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-inventory" (OuterVolumeSpecName: "inventory") pod "de094806-84d9-4903-be6c-c00e33b1e782" (UID: "de094806-84d9-4903-be6c-c00e33b1e782"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.132708 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de094806-84d9-4903-be6c-c00e33b1e782" (UID: "de094806-84d9-4903-be6c-c00e33b1e782"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.181429 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.181468 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncl8g\" (UniqueName: \"kubernetes.io/projected/de094806-84d9-4903-be6c-c00e33b1e782-kube-api-access-ncl8g\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.181485 4675 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.181497 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de094806-84d9-4903-be6c-c00e33b1e782-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:13 crc kubenswrapper[4675]: E1121 14:04:13.279203 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:04:13 crc kubenswrapper[4675]: E1121 14:04:13.280554 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:04:13 crc kubenswrapper[4675]: E1121 14:04:13.281708 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:04:13 crc kubenswrapper[4675]: E1121 14:04:13.281757 4675 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7498687b57-vr4xt" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerName="heat-engine" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.356582 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" event={"ID":"de094806-84d9-4903-be6c-c00e33b1e782","Type":"ContainerDied","Data":"4080975bc6e9b88323b47b319faf3716e4e57e5d4926bad600e63e0369b75c89"} Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.357217 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4080975bc6e9b88323b47b319faf3716e4e57e5d4926bad600e63e0369b75c89" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.357395 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.453051 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s"] Nov 21 14:04:13 crc kubenswrapper[4675]: E1121 14:04:13.453686 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de094806-84d9-4903-be6c-c00e33b1e782" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.453710 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="de094806-84d9-4903-be6c-c00e33b1e782" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:04:13 crc kubenswrapper[4675]: E1121 14:04:13.453776 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" containerName="aodh-db-sync" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.453785 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" containerName="aodh-db-sync" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.454047 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" containerName="aodh-db-sync" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.454108 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="de094806-84d9-4903-be6c-c00e33b1e782" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.454892 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.458720 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.458763 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.458763 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.458932 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.466991 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s"] Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.590692 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.590766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc 
kubenswrapper[4675]: I1121 14:04:13.590924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bxgj\" (UniqueName: \"kubernetes.io/projected/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-kube-api-access-4bxgj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.692823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bxgj\" (UniqueName: \"kubernetes.io/projected/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-kube-api-access-4bxgj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.693003 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.693075 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.697361 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.702107 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.712907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bxgj\" (UniqueName: \"kubernetes.io/projected/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-kube-api-access-4bxgj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2xt7s\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:13 crc kubenswrapper[4675]: I1121 14:04:13.777818 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:14 crc kubenswrapper[4675]: I1121 14:04:14.543591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s"] Nov 21 14:04:14 crc kubenswrapper[4675]: I1121 14:04:14.815710 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 14:04:14 crc kubenswrapper[4675]: I1121 14:04:14.816134 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-api" containerID="cri-o://2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1" gracePeriod=30 Nov 21 14:04:14 crc kubenswrapper[4675]: I1121 14:04:14.816195 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-listener" containerID="cri-o://fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875" gracePeriod=30 Nov 21 14:04:14 crc kubenswrapper[4675]: I1121 14:04:14.816239 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-notifier" containerID="cri-o://be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305" gracePeriod=30 Nov 21 14:04:14 crc kubenswrapper[4675]: I1121 14:04:14.816366 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-evaluator" containerID="cri-o://78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4" gracePeriod=30 Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.382919 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" event={"ID":"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c","Type":"ContainerStarted","Data":"fd2f54a306996762424764ad719e22da28acc73c3935833defd6e704a6e15feb"} Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.383494 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" event={"ID":"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c","Type":"ContainerStarted","Data":"f83f7914977c8a6ec7aadea66c97d397c79ea21ff22ebd2d36cbdacf74589cb3"} Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.386157 4675 generic.go:334] "Generic (PLEG): container finished" podID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerID="78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4" exitCode=0 Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.386194 4675 generic.go:334] "Generic (PLEG): container finished" podID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerID="2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1" exitCode=0 Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.386229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerDied","Data":"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4"} Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.386262 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerDied","Data":"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1"} Nov 21 14:04:15 crc 
kubenswrapper[4675]: I1121 14:04:15.914544 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.933897 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" podStartSLOduration=2.414183088 podStartE2EDuration="2.933872554s" podCreationTimestamp="2025-11-21 14:04:13 +0000 UTC" firstStartedPulling="2025-11-21 14:04:14.547887858 +0000 UTC m=+1931.274302585" lastFinishedPulling="2025-11-21 14:04:15.067577324 +0000 UTC m=+1931.793992051" observedRunningTime="2025-11-21 14:04:15.400432071 +0000 UTC m=+1932.126846808" watchObservedRunningTime="2025-11-21 14:04:15.933872554 +0000 UTC m=+1932.660287281" Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.948440 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-combined-ca-bundle\") pod \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.948521 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data-custom\") pod \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.948684 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2plbb\" (UniqueName: \"kubernetes.io/projected/0fbe5346-a173-4dc8-97f3-800ada75bf1b-kube-api-access-2plbb\") pod \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.948915 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data\") pod \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\" (UID: \"0fbe5346-a173-4dc8-97f3-800ada75bf1b\") " Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.955611 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0fbe5346-a173-4dc8-97f3-800ada75bf1b" (UID: "0fbe5346-a173-4dc8-97f3-800ada75bf1b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.956682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbe5346-a173-4dc8-97f3-800ada75bf1b-kube-api-access-2plbb" (OuterVolumeSpecName: "kube-api-access-2plbb") pod "0fbe5346-a173-4dc8-97f3-800ada75bf1b" (UID: "0fbe5346-a173-4dc8-97f3-800ada75bf1b"). InnerVolumeSpecName "kube-api-access-2plbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:04:15 crc kubenswrapper[4675]: I1121 14:04:15.982317 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fbe5346-a173-4dc8-97f3-800ada75bf1b" (UID: "0fbe5346-a173-4dc8-97f3-800ada75bf1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.026309 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data" (OuterVolumeSpecName: "config-data") pod "0fbe5346-a173-4dc8-97f3-800ada75bf1b" (UID: "0fbe5346-a173-4dc8-97f3-800ada75bf1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.051819 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2plbb\" (UniqueName: \"kubernetes.io/projected/0fbe5346-a173-4dc8-97f3-800ada75bf1b-kube-api-access-2plbb\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.051870 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.051890 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.051909 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fbe5346-a173-4dc8-97f3-800ada75bf1b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.400488 4675 generic.go:334] "Generic (PLEG): container finished" podID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" exitCode=0 Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.400569 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7498687b57-vr4xt" event={"ID":"0fbe5346-a173-4dc8-97f3-800ada75bf1b","Type":"ContainerDied","Data":"8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc"} Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.400633 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7498687b57-vr4xt" event={"ID":"0fbe5346-a173-4dc8-97f3-800ada75bf1b","Type":"ContainerDied","Data":"2ddfc1ed09cc32d3f69bed90e16cd7afc2d86d1b714229b87bb6a48b0f4c3266"} Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.400657 4675 scope.go:117] "RemoveContainer" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.401207 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7498687b57-vr4xt" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.433866 4675 scope.go:117] "RemoveContainer" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" Nov 21 14:04:16 crc kubenswrapper[4675]: E1121 14:04:16.434495 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc\": container with ID starting with 8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc not found: ID does not exist" containerID="8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.434653 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc"} err="failed to get container status \"8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc\": rpc error: code = NotFound desc = could not find container \"8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc\": container with ID starting with 8d063f301e05967a254ed6b73afe30cdda9c5c9de1e06a50e6b69a1d67abb5cc not found: ID does not exist" Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.439504 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7498687b57-vr4xt"] Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.454389 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7498687b57-vr4xt"] Nov 21 14:04:16 crc kubenswrapper[4675]: I1121 14:04:16.865862 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" path="/var/lib/kubelet/pods/0fbe5346-a173-4dc8-97f3-800ada75bf1b/volumes" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.257256 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.297556 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-config-data\") pod \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.297604 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-combined-ca-bundle\") pod \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.297676 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-scripts\") pod \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.297817 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-public-tls-certs\") pod \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.297857 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6r8k\" (UniqueName: \"kubernetes.io/projected/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-kube-api-access-t6r8k\") pod \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.303516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-scripts" (OuterVolumeSpecName: "scripts") pod "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" (UID: "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.303516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-kube-api-access-t6r8k" (OuterVolumeSpecName: "kube-api-access-t6r8k") pod "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" (UID: "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c"). InnerVolumeSpecName "kube-api-access-t6r8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.380265 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" (UID: "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.403181 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-internal-tls-certs\") pod \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\" (UID: \"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c\") " Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.403903 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.403920 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.403929 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6r8k\" (UniqueName: \"kubernetes.io/projected/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-kube-api-access-t6r8k\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.445149 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-config-data" (OuterVolumeSpecName: "config-data") pod "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" (UID: "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.445521 4675 generic.go:334] "Generic (PLEG): container finished" podID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerID="fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875" exitCode=0 Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.445545 4675 generic.go:334] "Generic (PLEG): container finished" podID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerID="be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305" exitCode=0 Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.445667 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.446602 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerDied","Data":"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875"} Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.446632 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerDied","Data":"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305"} Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.446641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c","Type":"ContainerDied","Data":"9470ace0c5ee6d1a4dc62749264a439ae3938f9875dcee2b5d5d88597d28d667"} Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.446655 4675 scope.go:117] "RemoveContainer" containerID="fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.448784 4675 generic.go:334] "Generic (PLEG): container finished" podID="021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" containerID="fd2f54a306996762424764ad719e22da28acc73c3935833defd6e704a6e15feb" exitCode=0 Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.448839 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" event={"ID":"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c","Type":"ContainerDied","Data":"fd2f54a306996762424764ad719e22da28acc73c3935833defd6e704a6e15feb"} Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.474323 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" (UID: "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.484015 4675 scope.go:117] "RemoveContainer" containerID="be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.495278 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" (UID: "d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.506090 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.506127 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.506137 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.518012 4675 scope.go:117] "RemoveContainer" containerID="78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.541168 4675 scope.go:117] "RemoveContainer" containerID="2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.566187 4675 scope.go:117] "RemoveContainer" containerID="fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.566734 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875\": container with ID starting with fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875 not found: ID does not exist" containerID="fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.566768 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875"} err="failed to get container status \"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875\": rpc error: code = NotFound desc = could not find container \"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875\": container with ID starting with fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.566787 4675 scope.go:117] "RemoveContainer" containerID="be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.567056 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305\": container with ID starting with be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305 not found: ID does not exist" containerID="be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.567090 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305"} err="failed to get container status \"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305\": rpc error: code = NotFound desc = could not find container 
\"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305\": container with ID starting with be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.567103 4675 scope.go:117] "RemoveContainer" containerID="78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.567411 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4\": container with ID starting with 78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4 not found: ID does not exist" containerID="78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.567429 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4"} err="failed to get container status \"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4\": rpc error: code = NotFound desc = could not find container \"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4\": container with ID starting with 78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.567442 4675 scope.go:117] "RemoveContainer" containerID="2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.567818 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1\": container with ID starting with 2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1 not found: ID does not exist" containerID="2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.567861 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1"} err="failed to get container status \"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1\": rpc error: code = NotFound desc = could not find container \"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1\": container with ID starting with 2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.567898 4675 scope.go:117] "RemoveContainer" containerID="fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.569081 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875"} err="failed to get container status \"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875\": rpc error: code = NotFound desc = could not find container \"fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875\": container with ID starting with fe5992fd9d7369bc1de81311d36ad473ad6a4d55e2b30c6aeb3d0fdfa11e3875 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.569100 4675 scope.go:117] "RemoveContainer" 
containerID="be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.569416 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305"} err="failed to get container status \"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305\": rpc error: code = NotFound desc = could not find container \"be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305\": container with ID starting with be3f1ab444ef13032b946d319e5e513d007e861a39fb7de5a4d7a1c84709e305 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.569456 4675 scope.go:117] "RemoveContainer" containerID="78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.569733 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4"} err="failed to get container status \"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4\": rpc error: code = NotFound desc = could not find container \"78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4\": container with ID starting with 78785c514176ebbd57174aa1d3976306eccdf42387e47b30f187ebcb8fa67ae4 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.569752 4675 scope.go:117] "RemoveContainer" containerID="2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.570010 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1"} err="failed to get container status \"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1\": rpc error: code = NotFound desc = could not find container \"2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1\": container with ID starting with 2b9e1e5cdcdd7245bb50725e7035c9b760023f6d0e82d6eeac25ff91c58b32b1 not found: ID does not exist" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.785859 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.802684 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.815972 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.816674 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerName="heat-engine" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.816698 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerName="heat-engine" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.816721 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-api" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.816731 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-api" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.816751 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-evaluator" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.816759 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-evaluator" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.816774 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-notifier" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.816781 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-notifier" Nov 21 14:04:18 crc kubenswrapper[4675]: E1121 14:04:18.816818 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-listener" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.816826 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-listener" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.817123 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-notifier" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.817152 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-api" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.817166 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-listener" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.817188 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbe5346-a173-4dc8-97f3-800ada75bf1b" containerName="heat-engine" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.817210 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" containerName="aodh-evaluator" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.819885 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.822718 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.823488 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-swnjz" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.825852 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.825889 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.825933 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.827526 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.874037 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c" path="/var/lib/kubelet/pods/d59e62a5-e2fb-4c56-a18b-98c0ed3ccf3c/volumes" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.916260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.916334 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-public-tls-certs\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.917385 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-config-data\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.917737 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2gj\" (UniqueName: \"kubernetes.io/projected/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-kube-api-access-jn2gj\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.917908 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-scripts\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:18 crc kubenswrapper[4675]: I1121 14:04:18.917978 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-internal-tls-certs\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.018967 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-internal-tls-certs\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.019052 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.019094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-public-tls-certs\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.019200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-config-data\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.019245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn2gj\" (UniqueName: \"kubernetes.io/projected/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-kube-api-access-jn2gj\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.019293 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-scripts\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.023231 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-scripts\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.023985 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.024591 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-internal-tls-certs\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.025448 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-config-data\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.026329 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-public-tls-certs\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.052192 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn2gj\" (UniqueName: \"kubernetes.io/projected/e1986ade-c95f-42c9-9ae4-8518e89cb7b8-kube-api-access-jn2gj\") pod \"aodh-0\" (UID: \"e1986ade-c95f-42c9-9ae4-8518e89cb7b8\") " pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.196898 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.693459 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:04:19 crc kubenswrapper[4675]: W1121 14:04:19.697208 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1986ade_c95f_42c9_9ae4_8518e89cb7b8.slice/crio-82f73c32bfcba04a56296c0b3ecda8597dbdf265ac9f8a9e27a3ff362e6e1db9 WatchSource:0}: Error finding container 82f73c32bfcba04a56296c0b3ecda8597dbdf265ac9f8a9e27a3ff362e6e1db9: Status 404 returned error can't find the container with id 82f73c32bfcba04a56296c0b3ecda8597dbdf265ac9f8a9e27a3ff362e6e1db9 Nov 21 14:04:19 crc kubenswrapper[4675]: I1121 14:04:19.850297 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.105330 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.264463 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bxgj\" (UniqueName: \"kubernetes.io/projected/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-kube-api-access-4bxgj\") pod \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.264875 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-inventory\") pod \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.264913 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-ssh-key\") pod \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\" (UID: \"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c\") " Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.276325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-kube-api-access-4bxgj" (OuterVolumeSpecName: "kube-api-access-4bxgj") pod "021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" (UID: "021eb0fa-a9a9-4af1-bc66-8b868fa3c41c"). InnerVolumeSpecName "kube-api-access-4bxgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.302214 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" (UID: "021eb0fa-a9a9-4af1-bc66-8b868fa3c41c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.330279 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-inventory" (OuterVolumeSpecName: "inventory") pod "021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" (UID: "021eb0fa-a9a9-4af1-bc66-8b868fa3c41c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.367586 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bxgj\" (UniqueName: \"kubernetes.io/projected/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-kube-api-access-4bxgj\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.367614 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.367624 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/021eb0fa-a9a9-4af1-bc66-8b868fa3c41c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.545348 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.546559 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2xt7s" event={"ID":"021eb0fa-a9a9-4af1-bc66-8b868fa3c41c","Type":"ContainerDied","Data":"f83f7914977c8a6ec7aadea66c97d397c79ea21ff22ebd2d36cbdacf74589cb3"} Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.546595 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83f7914977c8a6ec7aadea66c97d397c79ea21ff22ebd2d36cbdacf74589cb3" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.546613 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf"] Nov 21 14:04:20 crc kubenswrapper[4675]: E1121 14:04:20.547284 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.547305 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.547548 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="021eb0fa-a9a9-4af1-bc66-8b868fa3c41c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.548665 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.556227 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.556480 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.557334 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.558506 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.562631 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf"] Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.564943 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e1986ade-c95f-42c9-9ae4-8518e89cb7b8","Type":"ContainerStarted","Data":"07c4020baebb4e9f3741c1e00fd160e198f307602fdcd48f7d2c3840cc6750ff"} Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.564985 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e1986ade-c95f-42c9-9ae4-8518e89cb7b8","Type":"ContainerStarted","Data":"82f73c32bfcba04a56296c0b3ecda8597dbdf265ac9f8a9e27a3ff362e6e1db9"} Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.572496 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"e71f77ed549004b966e9029533b80c3b91f1ac795adb8367df9590ead7dca39c"} Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.674153 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.674225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.674586 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.674790 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzjv\" (UniqueName: 
\"kubernetes.io/projected/fb55d1ca-c721-4bca-9a73-e01fa4da2008-kube-api-access-qrzjv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.777290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrzjv\" (UniqueName: \"kubernetes.io/projected/fb55d1ca-c721-4bca-9a73-e01fa4da2008-kube-api-access-qrzjv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.777435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.777463 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.777553 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.783948 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.784863 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.789560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.793558 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrzjv\" (UniqueName: \"kubernetes.io/projected/fb55d1ca-c721-4bca-9a73-e01fa4da2008-kube-api-access-qrzjv\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:20 crc kubenswrapper[4675]: I1121 14:04:20.889725 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:04:21 crc kubenswrapper[4675]: I1121 14:04:21.341796 4675 scope.go:117] "RemoveContainer" containerID="1bbb88bd9afbd086aca21c08e19da8b038a0c676dcc1d99e006575a2fb40e964" Nov 21 14:04:21 crc kubenswrapper[4675]: I1121 14:04:21.373975 4675 scope.go:117] "RemoveContainer" containerID="e1b35fa9a394b0b13af8de47ece6a09ec0bae5ab6aa95b2f724c589931e67ee9" Nov 21 14:04:21 crc kubenswrapper[4675]: I1121 14:04:21.401983 4675 scope.go:117] "RemoveContainer" containerID="95ab690219c8ccfa8fbf984c09390258d8325ac816acaf75fad2e6fd73dc6168" Nov 21 14:04:21 crc kubenswrapper[4675]: I1121 14:04:21.600726 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e1986ade-c95f-42c9-9ae4-8518e89cb7b8","Type":"ContainerStarted","Data":"0bcbc218b4c10d5585a917066d0aa3a207435ac6f5309a341a00b4b240da3c3a"} Nov 21 14:04:21 crc kubenswrapper[4675]: I1121 14:04:21.667605 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf"] Nov 21 14:04:22 crc kubenswrapper[4675]: I1121 14:04:22.616316 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e1986ade-c95f-42c9-9ae4-8518e89cb7b8","Type":"ContainerStarted","Data":"eb35bfb5c9e4358f9640e90b063ad32f46d89ad663ea9b2cfdb881e23382b6f0"} Nov 21 14:04:22 crc kubenswrapper[4675]: I1121 14:04:22.618247 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" event={"ID":"fb55d1ca-c721-4bca-9a73-e01fa4da2008","Type":"ContainerStarted","Data":"62196444d1af20ca0449c047245a2b5f32815d0c9e33abd99e73ad1174779f1e"} Nov 21 14:04:22 crc kubenswrapper[4675]: I1121 14:04:22.618287 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" event={"ID":"fb55d1ca-c721-4bca-9a73-e01fa4da2008","Type":"ContainerStarted","Data":"e65bc531b32cd6c7ec209797be1a5d8201861118f33b71fab7605458998f7315"} Nov 21 14:04:22 crc kubenswrapper[4675]: I1121 14:04:22.652770 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" podStartSLOduration=1.957490839 podStartE2EDuration="2.652742669s" podCreationTimestamp="2025-11-21 14:04:20 +0000 UTC" firstStartedPulling="2025-11-21 14:04:21.672541232 +0000 UTC m=+1938.398955959" lastFinishedPulling="2025-11-21 14:04:22.367793062 +0000 UTC m=+1939.094207789" observedRunningTime="2025-11-21 14:04:22.640456908 +0000 UTC m=+1939.366871645" watchObservedRunningTime="2025-11-21 14:04:22.652742669 +0000 UTC m=+1939.379157396" Nov 21 14:04:24 crc kubenswrapper[4675]: I1121 14:04:24.646197 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e1986ade-c95f-42c9-9ae4-8518e89cb7b8","Type":"ContainerStarted","Data":"fb197f11c60d4b0d7b9fe2223dd038cb534d72bff9e4f04295164ecbeee9f3a3"} Nov 21 14:04:24 crc kubenswrapper[4675]: I1121 14:04:24.673205 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.756099216 podStartE2EDuration="6.673183667s" 
podCreationTimestamp="2025-11-21 14:04:18 +0000 UTC" firstStartedPulling="2025-11-21 14:04:19.699780138 +0000 UTC m=+1936.426194865" lastFinishedPulling="2025-11-21 14:04:23.616864589 +0000 UTC m=+1940.343279316" observedRunningTime="2025-11-21 14:04:24.66536331 +0000 UTC m=+1941.391778037" watchObservedRunningTime="2025-11-21 14:04:24.673183667 +0000 UTC m=+1941.399598384" Nov 21 14:05:21 crc kubenswrapper[4675]: I1121 14:05:21.637835 4675 scope.go:117] "RemoveContainer" containerID="2f21b9737b3dc310046ef45cb62c0bfa007fee44b01d7166c51d06bba472e35b" Nov 21 14:06:46 crc kubenswrapper[4675]: I1121 14:06:46.136664 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:06:46 crc kubenswrapper[4675]: I1121 14:06:46.137241 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:06:50 crc kubenswrapper[4675]: I1121 14:06:50.047299 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d8dc-account-create-6fzjl"] Nov 21 14:06:50 crc kubenswrapper[4675]: I1121 14:06:50.063447 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d8dc-account-create-6fzjl"] Nov 21 14:06:50 crc kubenswrapper[4675]: I1121 14:06:50.075409 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pstql"] Nov 21 14:06:50 crc kubenswrapper[4675]: I1121 14:06:50.089726 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pstql"] Nov 21 14:06:50 crc kubenswrapper[4675]: I1121 14:06:50.863450 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e23fca-5df9-4e92-a0c8-969fc4e1cca2" path="/var/lib/kubelet/pods/a2e23fca-5df9-4e92-a0c8-969fc4e1cca2/volumes" Nov 21 14:06:50 crc kubenswrapper[4675]: I1121 14:06:50.870366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b882b154-acec-4468-84e0-bab76ab42c69" path="/var/lib/kubelet/pods/b882b154-acec-4468-84e0-bab76ab42c69/volumes" Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.038972 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c3b3-account-create-nm4vf"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.057231 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8c4xz"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.071471 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1495-account-create-nnsn8"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.082881 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8c4xz"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.092017 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c3b3-account-create-nm4vf"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.101158 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-mmwhh"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.111153 4675 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-568sp"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.120656 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1495-account-create-nnsn8"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.129868 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-1973-account-create-j7slr"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.142562 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-mmwhh"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.151874 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-568sp"] Nov 21 14:06:55 crc kubenswrapper[4675]: I1121 14:06:55.161619 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-1973-account-create-j7slr"] Nov 21 14:06:56 crc kubenswrapper[4675]: I1121 14:06:56.865223 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02708edd-65ef-4cc3-9d43-757a138f4028" path="/var/lib/kubelet/pods/02708edd-65ef-4cc3-9d43-757a138f4028/volumes" Nov 21 14:06:56 crc kubenswrapper[4675]: I1121 14:06:56.867970 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e652d6b-549f-4f1f-a082-34fb47c60cc0" path="/var/lib/kubelet/pods/4e652d6b-549f-4f1f-a082-34fb47c60cc0/volumes" Nov 21 14:06:56 crc kubenswrapper[4675]: I1121 14:06:56.871523 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636cfc25-1856-420d-a4e3-d906b44c3751" path="/var/lib/kubelet/pods/636cfc25-1856-420d-a4e3-d906b44c3751/volumes" Nov 21 14:06:56 crc kubenswrapper[4675]: I1121 14:06:56.874650 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e3998f-742b-4adb-9f05-6cb0a77065ef" path="/var/lib/kubelet/pods/74e3998f-742b-4adb-9f05-6cb0a77065ef/volumes" Nov 21 14:06:56 crc kubenswrapper[4675]: I1121 14:06:56.876015 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caac7346-06cd-4263-b6aa-fa0ac48d8442" path="/var/lib/kubelet/pods/caac7346-06cd-4263-b6aa-fa0ac48d8442/volumes" Nov 21 14:06:56 crc kubenswrapper[4675]: I1121 14:06:56.880307 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d3dce6-9b93-48fb-b51e-203e3883c8cd" path="/var/lib/kubelet/pods/e6d3dce6-9b93-48fb-b51e-203e3883c8cd/volumes" Nov 21 14:07:03 crc kubenswrapper[4675]: I1121 14:07:03.046148 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"] Nov 21 14:07:03 crc kubenswrapper[4675]: I1121 14:07:03.060681 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-a877-account-create-qflzk"] Nov 21 14:07:03 crc kubenswrapper[4675]: I1121 14:07:03.074174 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rjxdq"] Nov 21 14:07:03 crc kubenswrapper[4675]: I1121 14:07:03.086211 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-a877-account-create-qflzk"] Nov 21 14:07:04 crc kubenswrapper[4675]: I1121 14:07:04.869852 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0084c6f-89fa-48fe-84f3-3d2805d9bf51" path="/var/lib/kubelet/pods/e0084c6f-89fa-48fe-84f3-3d2805d9bf51/volumes" Nov 21 14:07:04 crc kubenswrapper[4675]: I1121 14:07:04.872909 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f6615cb0-ff37-4c23-bd5c-5572486a0db4" path="/var/lib/kubelet/pods/f6615cb0-ff37-4c23-bd5c-5572486a0db4/volumes" Nov 21 14:07:16 crc kubenswrapper[4675]: I1121 14:07:16.136728 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:07:16 crc kubenswrapper[4675]: I1121 14:07:16.137268 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:07:21 crc kubenswrapper[4675]: I1121 14:07:21.741595 4675 scope.go:117] "RemoveContainer" containerID="3998a59caa25b8b6cada834813c1d8186a9b9e1a99d2a2e1d57ddc343932acb8" Nov 21 14:07:21 crc kubenswrapper[4675]: I1121 14:07:21.782819 4675 scope.go:117] "RemoveContainer" containerID="fbcbe7a400f7e3fcb982777c98dc5f95494214fdb691b546224449f1438577e2" Nov 21 14:07:21 crc kubenswrapper[4675]: I1121 14:07:21.846547 4675 scope.go:117] "RemoveContainer" containerID="40d9f14d72ca63a15652f3e8ccd3cd91a5bae8830ac2467973b19708838d70d3" Nov 21 14:07:21 crc kubenswrapper[4675]: I1121 14:07:21.895492 4675 scope.go:117] "RemoveContainer" containerID="5f369fc34cac5033406a28cb115428cf2f11d68fc01f3f2f46434705732ee7c5" Nov 21 14:07:21 crc kubenswrapper[4675]: I1121 14:07:21.947228 4675 scope.go:117] "RemoveContainer" containerID="56c128e74c725c7997ef213f1a20a3a3d1d56189608e4fdeb62f86b898de5815" Nov 21 14:07:21 crc kubenswrapper[4675]: I1121 14:07:21.998194 4675 scope.go:117] "RemoveContainer" containerID="347faf4cffe96e3f3472d972ddb45c9e2d1c9a619792db9c363cda22b7f7c448" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.023201 4675 scope.go:117] "RemoveContainer" containerID="586f640dbb478a252cac83be920826e464e3d45af122ed690f6414817b32085b" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.050631 4675 scope.go:117] "RemoveContainer" containerID="d27c8cd239ee49134d31c0a41e767f7ed7f68ecba34264b13eb5af1bc841b9e8" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.115864 4675 scope.go:117] "RemoveContainer" containerID="673412236fa63be3374f3656756261269023dd71be2cb8826ab22f4b64c72699" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.142105 4675 scope.go:117] "RemoveContainer" containerID="ee09c8fa8357a2a90bc002b059de1a00d53db1e4bcab58e1f1f609b7f5eb901d" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.180076 4675 scope.go:117] "RemoveContainer" containerID="e54054341bcef89e1b6f59450cd3b498799ef59d19d01ea2dda351cc2a2ec858" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.207806 4675 scope.go:117] "RemoveContainer" containerID="00049ab79945c27df254ed101e1f3f8eda94ee825df3c55fec49df16a0709369" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.236516 4675 scope.go:117] "RemoveContainer" containerID="513fdb6b46f2d9df30ce94a8db3dcb87d07114073ddad670f8d80b40c9c07d5c" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.265409 4675 scope.go:117] "RemoveContainer" containerID="4dad58cf810ce27540070cb3bc34a1d2d7bd84c47a22fc747320b0e579551c52" Nov 21 14:07:22 crc kubenswrapper[4675]: I1121 14:07:22.298372 4675 scope.go:117] "RemoveContainer" containerID="31af08730de271061b0e3fc25aea01bb35d25833d62356d931cd04c6a506708a" 
Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.110339 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-822b-account-create-khlxc"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.123171 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2480-account-create-94rfs"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.133698 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-gf6k8"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.143446 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3930-account-create-nfddg"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.153234 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-822b-account-create-khlxc"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.162236 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-szwd2"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.172117 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fft96"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.182290 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2lt6m"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.192176 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-szwd2"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.201829 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2480-account-create-94rfs"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.212774 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3930-account-create-nfddg"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.223567 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-gf6k8"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.237471 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fft96"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.249479 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2lt6m"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.265824 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6fc9-account-create-htjrd"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.276197 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6fc9-account-create-htjrd"] Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.873935 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069dca18-b43c-4d0a-a088-bd2c11e2e8d8" path="/var/lib/kubelet/pods/069dca18-b43c-4d0a-a088-bd2c11e2e8d8/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.875394 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106316ae-7bc1-4693-a06d-9d93516af3a4" path="/var/lib/kubelet/pods/106316ae-7bc1-4693-a06d-9d93516af3a4/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.876474 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51637b2d-14b5-4bb3-95ee-e2cafe7780e2" path="/var/lib/kubelet/pods/51637b2d-14b5-4bb3-95ee-e2cafe7780e2/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.877194 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85230bf1-fd16-4f24-9f1f-13d6a960db2f" 
path="/var/lib/kubelet/pods/85230bf1-fd16-4f24-9f1f-13d6a960db2f/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.878302 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86850670-f736-42b5-87ed-147d7a572d73" path="/var/lib/kubelet/pods/86850670-f736-42b5-87ed-147d7a572d73/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.878969 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a7071c-2a7a-431a-b580-a4f6038444b6" path="/var/lib/kubelet/pods/a5a7071c-2a7a-431a-b580-a4f6038444b6/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.879634 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a259f8-81fa-445f-b17b-9d12b0114a50" path="/var/lib/kubelet/pods/c9a259f8-81fa-445f-b17b-9d12b0114a50/volumes" Nov 21 14:07:32 crc kubenswrapper[4675]: I1121 14:07:32.880986 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e5671c-9388-46fd-b5f2-7c2bc71db709" path="/var/lib/kubelet/pods/d5e5671c-9388-46fd-b5f2-7c2bc71db709/volumes" Nov 21 14:07:37 crc kubenswrapper[4675]: I1121 14:07:37.964545 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb55d1ca-c721-4bca-9a73-e01fa4da2008" containerID="62196444d1af20ca0449c047245a2b5f32815d0c9e33abd99e73ad1174779f1e" exitCode=0 Nov 21 14:07:37 crc kubenswrapper[4675]: I1121 14:07:37.964668 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" event={"ID":"fb55d1ca-c721-4bca-9a73-e01fa4da2008","Type":"ContainerDied","Data":"62196444d1af20ca0449c047245a2b5f32815d0c9e33abd99e73ad1174779f1e"} Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.562989 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.681794 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-bootstrap-combined-ca-bundle\") pod \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.682347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-inventory\") pod \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.682520 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-ssh-key\") pod \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.682819 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrzjv\" (UniqueName: \"kubernetes.io/projected/fb55d1ca-c721-4bca-9a73-e01fa4da2008-kube-api-access-qrzjv\") pod \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\" (UID: \"fb55d1ca-c721-4bca-9a73-e01fa4da2008\") " Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.689916 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb55d1ca-c721-4bca-9a73-e01fa4da2008-kube-api-access-qrzjv" (OuterVolumeSpecName: "kube-api-access-qrzjv") pod "fb55d1ca-c721-4bca-9a73-e01fa4da2008" (UID: "fb55d1ca-c721-4bca-9a73-e01fa4da2008"). InnerVolumeSpecName "kube-api-access-qrzjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.689967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fb55d1ca-c721-4bca-9a73-e01fa4da2008" (UID: "fb55d1ca-c721-4bca-9a73-e01fa4da2008"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.737151 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb55d1ca-c721-4bca-9a73-e01fa4da2008" (UID: "fb55d1ca-c721-4bca-9a73-e01fa4da2008"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.738399 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-inventory" (OuterVolumeSpecName: "inventory") pod "fb55d1ca-c721-4bca-9a73-e01fa4da2008" (UID: "fb55d1ca-c721-4bca-9a73-e01fa4da2008"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.786965 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrzjv\" (UniqueName: \"kubernetes.io/projected/fb55d1ca-c721-4bca-9a73-e01fa4da2008-kube-api-access-qrzjv\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.787031 4675 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.787043 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:39 crc kubenswrapper[4675]: I1121 14:07:39.787054 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb55d1ca-c721-4bca-9a73-e01fa4da2008-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.000814 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" event={"ID":"fb55d1ca-c721-4bca-9a73-e01fa4da2008","Type":"ContainerDied","Data":"e65bc531b32cd6c7ec209797be1a5d8201861118f33b71fab7605458998f7315"} Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.000891 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e65bc531b32cd6c7ec209797be1a5d8201861118f33b71fab7605458998f7315" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.001029 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.062495 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wrfwv"] Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.088715 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wrfwv"] Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.109341 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9"] Nov 21 14:07:40 crc kubenswrapper[4675]: E1121 14:07:40.109809 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb55d1ca-c721-4bca-9a73-e01fa4da2008" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.109835 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb55d1ca-c721-4bca-9a73-e01fa4da2008" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.110170 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb55d1ca-c721-4bca-9a73-e01fa4da2008" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.111317 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.113940 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.114100 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.114196 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.114402 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.121878 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9"] Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.309492 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.309854 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqs8z\" (UniqueName: \"kubernetes.io/projected/a3366490-72da-4662-a609-d3fd320bac49-kube-api-access-mqs8z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.309935 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.411558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.411730 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqs8z\" (UniqueName: \"kubernetes.io/projected/a3366490-72da-4662-a609-d3fd320bac49-kube-api-access-mqs8z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.411784 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.415519 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.415787 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.435397 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqs8z\" (UniqueName: \"kubernetes.io/projected/a3366490-72da-4662-a609-d3fd320bac49-kube-api-access-mqs8z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.735496 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:07:40 crc kubenswrapper[4675]: I1121 14:07:40.862868 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f97708f-a116-4805-b085-d887c811b56a" path="/var/lib/kubelet/pods/8f97708f-a116-4805-b085-d887c811b56a/volumes" Nov 21 14:07:41 crc kubenswrapper[4675]: I1121 14:07:41.299360 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9"] Nov 21 14:07:41 crc kubenswrapper[4675]: I1121 14:07:41.308018 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:07:42 crc kubenswrapper[4675]: I1121 14:07:42.029314 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" event={"ID":"a3366490-72da-4662-a609-d3fd320bac49","Type":"ContainerStarted","Data":"59428c2038e32182f353fbd2e613bb055430e6584a409e25081ed36580ad71e6"} Nov 21 14:07:42 crc kubenswrapper[4675]: I1121 14:07:42.029872 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" event={"ID":"a3366490-72da-4662-a609-d3fd320bac49","Type":"ContainerStarted","Data":"420fa4db02f58f9d14ad3f24f9e7385179787fec5be2085ca77baf1b38365adb"} Nov 21 14:07:42 crc kubenswrapper[4675]: I1121 14:07:42.044398 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" podStartSLOduration=1.587476983 podStartE2EDuration="2.04437916s" podCreationTimestamp="2025-11-21 14:07:40 +0000 UTC" firstStartedPulling="2025-11-21 14:07:41.307667963 +0000 UTC m=+2138.034082700" lastFinishedPulling="2025-11-21 14:07:41.76457014 +0000 UTC m=+2138.490984877" observedRunningTime="2025-11-21 14:07:42.043004945 +0000 UTC m=+2138.769419682" 
watchObservedRunningTime="2025-11-21 14:07:42.04437916 +0000 UTC m=+2138.770793887" Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.059837 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8d6qh"] Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.068361 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8d6qh"] Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.136262 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.136345 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.136390 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.137298 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e71f77ed549004b966e9029533b80c3b91f1ac795adb8367df9590ead7dca39c"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.137361 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://e71f77ed549004b966e9029533b80c3b91f1ac795adb8367df9590ead7dca39c" gracePeriod=600 Nov 21 14:07:46 crc kubenswrapper[4675]: I1121 14:07:46.866234 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e21ce4f-da1d-4f89-8f41-6bb22c247d04" path="/var/lib/kubelet/pods/8e21ce4f-da1d-4f89-8f41-6bb22c247d04/volumes" Nov 21 14:07:47 crc kubenswrapper[4675]: I1121 14:07:47.105820 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="e71f77ed549004b966e9029533b80c3b91f1ac795adb8367df9590ead7dca39c" exitCode=0 Nov 21 14:07:47 crc kubenswrapper[4675]: I1121 14:07:47.105872 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"e71f77ed549004b966e9029533b80c3b91f1ac795adb8367df9590ead7dca39c"} Nov 21 14:07:47 crc kubenswrapper[4675]: I1121 14:07:47.105903 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"} Nov 21 14:07:47 crc kubenswrapper[4675]: I1121 14:07:47.105922 4675 scope.go:117] "RemoveContainer" containerID="8333e139ca1bf4b88d7f4e25591c6c5f2272dc9b212a5548d99b1abb84b44aa4" Nov 21 14:08:22 crc 
kubenswrapper[4675]: I1121 14:08:22.557648 4675 scope.go:117] "RemoveContainer" containerID="ecc9dafcfd46069f8d80d1325eabea0071123d924cda7062d60f64f8265eafad" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.585561 4675 scope.go:117] "RemoveContainer" containerID="5d9bce0667bbbb8900a4362644a545488eb7a4cda512f80404c74fd6dc385134" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.643873 4675 scope.go:117] "RemoveContainer" containerID="b5a0619eed438ef5b9b63dae0c70eef53f6c47850b0002414a5d7edd4e8383d2" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.700201 4675 scope.go:117] "RemoveContainer" containerID="3d617bcfd750d2c29758c22e11f3f439fcc593aa559c6c917543d88ab1b936fe" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.756636 4675 scope.go:117] "RemoveContainer" containerID="ec2d13b0fc8869495c1c60be2c0e3b807628a3cb36567765d9980feed0ab3faf" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.816707 4675 scope.go:117] "RemoveContainer" containerID="8e93cc4a5b28b1c6d34e8d66813c02fb2d8fa0a457888588fb3a1e0345b70c2c" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.905306 4675 scope.go:117] "RemoveContainer" containerID="2a2174c5ed73f358043c151d81c55c29c2a00d4baf484e6e6a2636c0ccade629" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.931283 4675 scope.go:117] "RemoveContainer" containerID="7ac16f0c69bc28cf9b0096e41a231efb4ec8780f387b78ba3cfd646fe897286d" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.955249 4675 scope.go:117] "RemoveContainer" containerID="fac5d61a0b3deccc88c8251f86febb417303645964ed1a5d65724ad542e0e78a" Nov 21 14:08:22 crc kubenswrapper[4675]: I1121 14:08:22.973059 4675 scope.go:117] "RemoveContainer" containerID="a10c2f6749aca891fc816f61787aecbfd346faeeca768c86f85b7d1504f01496" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.227349 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-285rf"] Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.230521 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.240230 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-285rf"] Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.317788 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-utilities\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.318143 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zd2r\" (UniqueName: \"kubernetes.io/projected/d14494f8-b884-42cb-a341-fc69969159a3-kube-api-access-2zd2r\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.318954 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-catalog-content\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.422555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-catalog-content\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.422688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-utilities\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.422853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zd2r\" (UniqueName: \"kubernetes.io/projected/d14494f8-b884-42cb-a341-fc69969159a3-kube-api-access-2zd2r\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.423273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-utilities\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.423287 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-catalog-content\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.442979 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2zd2r\" (UniqueName: \"kubernetes.io/projected/d14494f8-b884-42cb-a341-fc69969159a3-kube-api-access-2zd2r\") pod \"redhat-marketplace-285rf\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:23 crc kubenswrapper[4675]: I1121 14:08:23.561025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:24 crc kubenswrapper[4675]: I1121 14:08:24.036534 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-285rf"] Nov 21 14:08:24 crc kubenswrapper[4675]: I1121 14:08:24.543139 4675 generic.go:334] "Generic (PLEG): container finished" podID="d14494f8-b884-42cb-a341-fc69969159a3" containerID="6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904" exitCode=0 Nov 21 14:08:24 crc kubenswrapper[4675]: I1121 14:08:24.543184 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-285rf" event={"ID":"d14494f8-b884-42cb-a341-fc69969159a3","Type":"ContainerDied","Data":"6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904"} Nov 21 14:08:24 crc kubenswrapper[4675]: I1121 14:08:24.543507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-285rf" event={"ID":"d14494f8-b884-42cb-a341-fc69969159a3","Type":"ContainerStarted","Data":"0218648b78a8aa5b1bc68845469ff4fd75ea19d07987e3bb058d00ab8c95b108"} Nov 21 14:08:26 crc kubenswrapper[4675]: I1121 14:08:26.574142 4675 generic.go:334] "Generic (PLEG): container finished" podID="d14494f8-b884-42cb-a341-fc69969159a3" containerID="c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747" exitCode=0 Nov 21 14:08:26 crc kubenswrapper[4675]: I1121 14:08:26.574276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-285rf" event={"ID":"d14494f8-b884-42cb-a341-fc69969159a3","Type":"ContainerDied","Data":"c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747"} Nov 21 14:08:27 crc kubenswrapper[4675]: I1121 14:08:27.585958 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-285rf" event={"ID":"d14494f8-b884-42cb-a341-fc69969159a3","Type":"ContainerStarted","Data":"bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec"} Nov 21 14:08:27 crc kubenswrapper[4675]: I1121 14:08:27.607110 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-285rf" podStartSLOduration=2.144529078 podStartE2EDuration="4.607092476s" podCreationTimestamp="2025-11-21 14:08:23 +0000 UTC" firstStartedPulling="2025-11-21 14:08:24.54547786 +0000 UTC m=+2181.271892587" lastFinishedPulling="2025-11-21 14:08:27.008041258 +0000 UTC m=+2183.734455985" observedRunningTime="2025-11-21 14:08:27.603083324 +0000 UTC m=+2184.329498051" watchObservedRunningTime="2025-11-21 14:08:27.607092476 +0000 UTC m=+2184.333507203" Nov 21 14:08:28 crc kubenswrapper[4675]: I1121 14:08:28.052669 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2jx6q"] Nov 21 14:08:28 crc kubenswrapper[4675]: I1121 14:08:28.064574 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2jx6q"] Nov 21 14:08:28 crc kubenswrapper[4675]: I1121 14:08:28.863688 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4" path="/var/lib/kubelet/pods/f4b22aab-d6a1-49b2-a8cb-4d561e2b58c4/volumes" Nov 21 14:08:29 crc kubenswrapper[4675]: I1121 14:08:29.031705 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-g725b"] Nov 21 14:08:29 crc kubenswrapper[4675]: I1121 14:08:29.042134 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-g725b"] Nov 21 14:08:30 crc kubenswrapper[4675]: I1121 14:08:30.862392 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87d2cb3-6cf6-4f8e-ad16-021304428c63" path="/var/lib/kubelet/pods/b87d2cb3-6cf6-4f8e-ad16-021304428c63/volumes" Nov 21 14:08:32 crc kubenswrapper[4675]: I1121 14:08:32.042647 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wgt2v"] Nov 21 14:08:32 crc kubenswrapper[4675]: I1121 14:08:32.052795 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wgt2v"] Nov 21 14:08:32 crc kubenswrapper[4675]: I1121 14:08:32.862502 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc50afd7-32f7-4d99-9952-81186547313e" path="/var/lib/kubelet/pods/fc50afd7-32f7-4d99-9952-81186547313e/volumes" Nov 21 14:08:33 crc kubenswrapper[4675]: I1121 14:08:33.562332 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:33 crc kubenswrapper[4675]: I1121 14:08:33.562384 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:33 crc kubenswrapper[4675]: I1121 14:08:33.611191 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:33 crc kubenswrapper[4675]: I1121 14:08:33.696592 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:33 crc kubenswrapper[4675]: I1121 14:08:33.852057 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-285rf"] Nov 21 14:08:35 crc kubenswrapper[4675]: I1121 14:08:35.668694 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-285rf" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="registry-server" containerID="cri-o://bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec" gracePeriod=2 Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.182408 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.344699 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-utilities\") pod \"d14494f8-b884-42cb-a341-fc69969159a3\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.344841 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zd2r\" (UniqueName: \"kubernetes.io/projected/d14494f8-b884-42cb-a341-fc69969159a3-kube-api-access-2zd2r\") pod \"d14494f8-b884-42cb-a341-fc69969159a3\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.344899 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-catalog-content\") pod \"d14494f8-b884-42cb-a341-fc69969159a3\" (UID: \"d14494f8-b884-42cb-a341-fc69969159a3\") " Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.345800 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-utilities" (OuterVolumeSpecName: "utilities") pod "d14494f8-b884-42cb-a341-fc69969159a3" (UID: "d14494f8-b884-42cb-a341-fc69969159a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.354416 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14494f8-b884-42cb-a341-fc69969159a3-kube-api-access-2zd2r" (OuterVolumeSpecName: "kube-api-access-2zd2r") pod "d14494f8-b884-42cb-a341-fc69969159a3" (UID: "d14494f8-b884-42cb-a341-fc69969159a3"). InnerVolumeSpecName "kube-api-access-2zd2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.398020 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d14494f8-b884-42cb-a341-fc69969159a3" (UID: "d14494f8-b884-42cb-a341-fc69969159a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.448130 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.448172 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zd2r\" (UniqueName: \"kubernetes.io/projected/d14494f8-b884-42cb-a341-fc69969159a3-kube-api-access-2zd2r\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.448186 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14494f8-b884-42cb-a341-fc69969159a3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.680643 4675 generic.go:334] "Generic (PLEG): container finished" podID="d14494f8-b884-42cb-a341-fc69969159a3" containerID="bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec" exitCode=0 Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.680689 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-285rf" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.680765 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-285rf" event={"ID":"d14494f8-b884-42cb-a341-fc69969159a3","Type":"ContainerDied","Data":"bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec"} Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.681135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-285rf" event={"ID":"d14494f8-b884-42cb-a341-fc69969159a3","Type":"ContainerDied","Data":"0218648b78a8aa5b1bc68845469ff4fd75ea19d07987e3bb058d00ab8c95b108"} Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.681162 4675 scope.go:117] "RemoveContainer" containerID="bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.729748 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-285rf"] Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.729910 4675 scope.go:117] "RemoveContainer" containerID="c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.740626 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-285rf"] Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.748413 4675 scope.go:117] "RemoveContainer" containerID="6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.803333 4675 scope.go:117] "RemoveContainer" containerID="bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec" Nov 21 14:08:36 crc kubenswrapper[4675]: E1121 14:08:36.803867 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec\": container with ID starting with bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec not found: ID does not exist" containerID="bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.803894 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec"} err="failed to get container status \"bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec\": rpc error: code = NotFound desc = could not find container \"bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec\": container with ID starting with bbfb7bd4033604ea1861c30fe9da53cf9ebd036589570be74de116bbcd75acec not found: ID does not exist" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.803914 4675 scope.go:117] "RemoveContainer" containerID="c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747" Nov 21 14:08:36 crc kubenswrapper[4675]: E1121 14:08:36.804181 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747\": container with ID starting with c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747 not found: ID does not exist" containerID="c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.804203 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747"} err="failed to get container status \"c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747\": rpc error: code = NotFound desc = could not find container \"c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747\": container with ID starting with c1dc1389ce738f979364cc3e5e4d540bb721b4d00219a398604939ac4dbe7747 not found: ID does not exist" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.804216 4675 scope.go:117] "RemoveContainer" containerID="6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904" Nov 21 14:08:36 crc kubenswrapper[4675]: E1121 14:08:36.804438 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904\": container with ID starting with 6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904 not found: ID does not exist" containerID="6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.804456 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904"} err="failed to get container status \"6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904\": rpc error: code = NotFound desc = could not find container \"6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904\": container with ID starting with 6e13964836138cd4fcb8ca86a50f931c300f70dc1f3bad50756f1c94bac28904 not found: ID does not exist" Nov 21 14:08:36 crc kubenswrapper[4675]: I1121 14:08:36.862676 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d14494f8-b884-42cb-a341-fc69969159a3" path="/var/lib/kubelet/pods/d14494f8-b884-42cb-a341-fc69969159a3/volumes" Nov 21 14:08:48 crc kubenswrapper[4675]: I1121 14:08:48.038774 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ljvgq"] Nov 21 14:08:48 crc kubenswrapper[4675]: I1121 14:08:48.049315 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-ljvgq"] Nov 21 14:08:48 crc kubenswrapper[4675]: I1121 14:08:48.862582 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef50e12-86e6-4c25-b99e-4fc6506d3890" path="/var/lib/kubelet/pods/9ef50e12-86e6-4c25-b99e-4fc6506d3890/volumes" Nov 21 14:08:50 crc kubenswrapper[4675]: I1121 14:08:50.033633 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-w28m5"] Nov 21 14:08:50 crc kubenswrapper[4675]: I1121 14:08:50.044125 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-w28m5"] Nov 21 14:08:50 crc kubenswrapper[4675]: I1121 14:08:50.867161 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8406cb5-f871-4355-811c-7090afd8aa2e" path="/var/lib/kubelet/pods/d8406cb5-f871-4355-811c-7090afd8aa2e/volumes" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.216508 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tfhw2"] Nov 21 14:09:06 crc kubenswrapper[4675]: E1121 14:09:06.217761 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="registry-server" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.217780 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="registry-server" Nov 21 14:09:06 crc kubenswrapper[4675]: E1121 14:09:06.217799 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="extract-utilities" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.217807 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="extract-utilities" Nov 21 14:09:06 crc kubenswrapper[4675]: E1121 14:09:06.217831 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="extract-content" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.217839 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="extract-content" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.218157 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14494f8-b884-42cb-a341-fc69969159a3" containerName="registry-server" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.220415 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.226807 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tfhw2"] Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.277808 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-utilities\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.277899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznfk\" (UniqueName: \"kubernetes.io/projected/474f2911-3753-401b-8580-f1027d5b4713-kube-api-access-lznfk\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.278045 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-catalog-content\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.381058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-catalog-content\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.381305 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-utilities\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.381379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznfk\" (UniqueName: \"kubernetes.io/projected/474f2911-3753-401b-8580-f1027d5b4713-kube-api-access-lznfk\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.381809 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-catalog-content\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.382099 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-utilities\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.401356 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lznfk\" (UniqueName: \"kubernetes.io/projected/474f2911-3753-401b-8580-f1027d5b4713-kube-api-access-lznfk\") pod \"certified-operators-tfhw2\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.423091 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pwjwv"] Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.426363 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.434597 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pwjwv"] Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.547794 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.585404 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w72g\" (UniqueName: \"kubernetes.io/projected/1c7213f9-3076-4d37-9803-1156edec2aaa-kube-api-access-8w72g\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.585863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c7213f9-3076-4d37-9803-1156edec2aaa-catalog-content\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.586036 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c7213f9-3076-4d37-9803-1156edec2aaa-utilities\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.696419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c7213f9-3076-4d37-9803-1156edec2aaa-catalog-content\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.696607 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c7213f9-3076-4d37-9803-1156edec2aaa-utilities\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.696749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w72g\" (UniqueName: \"kubernetes.io/projected/1c7213f9-3076-4d37-9803-1156edec2aaa-kube-api-access-8w72g\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.697455 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c7213f9-3076-4d37-9803-1156edec2aaa-catalog-content\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.697698 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c7213f9-3076-4d37-9803-1156edec2aaa-utilities\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.724685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w72g\" (UniqueName: \"kubernetes.io/projected/1c7213f9-3076-4d37-9803-1156edec2aaa-kube-api-access-8w72g\") pod \"community-operators-pwjwv\" (UID: \"1c7213f9-3076-4d37-9803-1156edec2aaa\") " pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:06 crc kubenswrapper[4675]: I1121 14:09:06.791736 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:07 crc kubenswrapper[4675]: I1121 14:09:07.169113 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tfhw2"] Nov 21 14:09:07 crc kubenswrapper[4675]: I1121 14:09:07.499440 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pwjwv"] Nov 21 14:09:08 crc kubenswrapper[4675]: I1121 14:09:08.032434 4675 generic.go:334] "Generic (PLEG): container finished" podID="474f2911-3753-401b-8580-f1027d5b4713" containerID="1ecc1425ec73a2b1a36bc6992920d5642ef154f4e2d8a7ad1a5992dc49ea327b" exitCode=0 Nov 21 14:09:08 crc kubenswrapper[4675]: I1121 14:09:08.032497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfhw2" event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerDied","Data":"1ecc1425ec73a2b1a36bc6992920d5642ef154f4e2d8a7ad1a5992dc49ea327b"} Nov 21 14:09:08 crc kubenswrapper[4675]: I1121 14:09:08.032770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfhw2" event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerStarted","Data":"86c08002f831acbd078680b3da06ad2d4c74c2318c4b682df4df77c322e35652"} Nov 21 14:09:08 crc kubenswrapper[4675]: I1121 14:09:08.035354 4675 generic.go:334] "Generic (PLEG): container finished" podID="1c7213f9-3076-4d37-9803-1156edec2aaa" containerID="1a9083bcf5bef98298174069e2682ddee1692b8bd079e414b8574e0aa4cb10dc" exitCode=0 Nov 21 14:09:08 crc kubenswrapper[4675]: I1121 14:09:08.035459 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwjwv" event={"ID":"1c7213f9-3076-4d37-9803-1156edec2aaa","Type":"ContainerDied","Data":"1a9083bcf5bef98298174069e2682ddee1692b8bd079e414b8574e0aa4cb10dc"} Nov 21 14:09:08 crc kubenswrapper[4675]: I1121 14:09:08.035502 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwjwv" event={"ID":"1c7213f9-3076-4d37-9803-1156edec2aaa","Type":"ContainerStarted","Data":"b6ec2390e5c3b8edf19512b12b1560547bd429b64f0314117fd0074567d45f50"} Nov 21 14:09:11 crc kubenswrapper[4675]: I1121 14:09:11.076252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-tfhw2" event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerStarted","Data":"f5815816075109f278c20b43f6361f7188aa2b491edd36ff307b3c09fe9f8a5a"} Nov 21 14:09:16 crc kubenswrapper[4675]: I1121 14:09:16.212247 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-896kh" podUID="afecd2d7-f280-48fd-b79e-eec3a7ee36f1" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:09:23 crc kubenswrapper[4675]: I1121 14:09:23.186596 4675 scope.go:117] "RemoveContainer" containerID="22ede8ee41a76ab5eec6d7f27028d967e19f6633224401e6810cb5f13928bcda" Nov 21 14:09:23 crc kubenswrapper[4675]: I1121 14:09:23.218902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwjwv" event={"ID":"1c7213f9-3076-4d37-9803-1156edec2aaa","Type":"ContainerStarted","Data":"199a69efa74f66d861a733b53d04d1042bfeca172256ec5c51535ea94ab63f27"} Nov 21 14:09:23 crc kubenswrapper[4675]: I1121 14:09:23.716031 4675 scope.go:117] "RemoveContainer" containerID="d87f3e031452ac6493d5a8f97e1741eb98eae6e086bc9b8294ffedbb54990093" Nov 21 14:09:23 crc kubenswrapper[4675]: I1121 14:09:23.816459 4675 scope.go:117] "RemoveContainer" containerID="2fda938354f9b7d247dba067271f91c3707a67f1ee2c440294f91b767b9b1acf" Nov 21 14:09:23 crc kubenswrapper[4675]: I1121 14:09:23.881554 4675 scope.go:117] "RemoveContainer" containerID="ed5746704b11be7c1bd3f0297c0df88a6c6f02bce1594d2897c14072fa3763fe" Nov 21 14:09:23 crc kubenswrapper[4675]: I1121 14:09:23.927089 4675 scope.go:117] "RemoveContainer" containerID="2e4a3d5ac8e49d5a151ff61fc91301ab58ef7aa6799a5793a2491c3d0dabce64" Nov 21 14:09:24 crc kubenswrapper[4675]: I1121 14:09:24.236261 4675 generic.go:334] "Generic (PLEG): container finished" podID="1c7213f9-3076-4d37-9803-1156edec2aaa" containerID="199a69efa74f66d861a733b53d04d1042bfeca172256ec5c51535ea94ab63f27" exitCode=0 Nov 21 14:09:24 crc kubenswrapper[4675]: I1121 14:09:24.236367 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwjwv" event={"ID":"1c7213f9-3076-4d37-9803-1156edec2aaa","Type":"ContainerDied","Data":"199a69efa74f66d861a733b53d04d1042bfeca172256ec5c51535ea94ab63f27"} Nov 21 14:09:25 crc kubenswrapper[4675]: I1121 14:09:25.261637 4675 generic.go:334] "Generic (PLEG): container finished" podID="474f2911-3753-401b-8580-f1027d5b4713" containerID="f5815816075109f278c20b43f6361f7188aa2b491edd36ff307b3c09fe9f8a5a" exitCode=0 Nov 21 14:09:25 crc kubenswrapper[4675]: I1121 14:09:25.261767 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfhw2" event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerDied","Data":"f5815816075109f278c20b43f6361f7188aa2b491edd36ff307b3c09fe9f8a5a"} Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.276440 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwjwv" event={"ID":"1c7213f9-3076-4d37-9803-1156edec2aaa","Type":"ContainerStarted","Data":"6aaa8c17b474ef300168780a3a6fb0865af089b1a44ea624f4e754baaf754422"} Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.286476 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfhw2" 
event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerStarted","Data":"178ef29ab8e9018f748325c2eaf903c5b00be649254b1e636763ab476ae73211"} Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.309847 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pwjwv" podStartSLOduration=2.711910215 podStartE2EDuration="20.309824147s" podCreationTimestamp="2025-11-21 14:09:06 +0000 UTC" firstStartedPulling="2025-11-21 14:09:08.039426766 +0000 UTC m=+2224.765841493" lastFinishedPulling="2025-11-21 14:09:25.637340698 +0000 UTC m=+2242.363755425" observedRunningTime="2025-11-21 14:09:26.29730114 +0000 UTC m=+2243.023715867" watchObservedRunningTime="2025-11-21 14:09:26.309824147 +0000 UTC m=+2243.036238864" Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.332612 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tfhw2" podStartSLOduration=2.61147929 podStartE2EDuration="20.332591344s" podCreationTimestamp="2025-11-21 14:09:06 +0000 UTC" firstStartedPulling="2025-11-21 14:09:08.035101986 +0000 UTC m=+2224.761516713" lastFinishedPulling="2025-11-21 14:09:25.75621403 +0000 UTC m=+2242.482628767" observedRunningTime="2025-11-21 14:09:26.319368039 +0000 UTC m=+2243.045782776" watchObservedRunningTime="2025-11-21 14:09:26.332591344 +0000 UTC m=+2243.059006071" Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.548127 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.548196 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.792746 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:26 crc kubenswrapper[4675]: I1121 14:09:26.792792 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:27 crc kubenswrapper[4675]: I1121 14:09:27.601652 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tfhw2" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="registry-server" probeResult="failure" output=< Nov 21 14:09:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:09:27 crc kubenswrapper[4675]: > Nov 21 14:09:27 crc kubenswrapper[4675]: I1121 14:09:27.853645 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pwjwv" podUID="1c7213f9-3076-4d37-9803-1156edec2aaa" containerName="registry-server" probeResult="failure" output=< Nov 21 14:09:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:09:27 crc kubenswrapper[4675]: > Nov 21 14:09:36 crc kubenswrapper[4675]: I1121 14:09:36.040927 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wnqhq"] Nov 21 14:09:36 crc kubenswrapper[4675]: I1121 14:09:36.052907 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wnqhq"] Nov 21 14:09:36 crc kubenswrapper[4675]: I1121 14:09:36.600327 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:36 crc 
kubenswrapper[4675]: I1121 14:09:36.668596 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:36 crc kubenswrapper[4675]: I1121 14:09:36.842493 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:36 crc kubenswrapper[4675]: I1121 14:09:36.866007 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ae1285-2384-4da2-803f-9395625e88de" path="/var/lib/kubelet/pods/68ae1285-2384-4da2-803f-9395625e88de/volumes" Nov 21 14:09:36 crc kubenswrapper[4675]: I1121 14:09:36.896011 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pwjwv" Nov 21 14:09:37 crc kubenswrapper[4675]: I1121 14:09:37.817603 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tfhw2"] Nov 21 14:09:38 crc kubenswrapper[4675]: I1121 14:09:38.417589 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tfhw2" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="registry-server" containerID="cri-o://178ef29ab8e9018f748325c2eaf903c5b00be649254b1e636763ab476ae73211" gracePeriod=2 Nov 21 14:09:38 crc kubenswrapper[4675]: I1121 14:09:38.952102 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pwjwv"] Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.222118 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmt5k"] Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.222435 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mmt5k" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="registry-server" containerID="cri-o://ea21d41fb79a4708b31fa83850a220fcdff20bd95991a425c0e22417a5d0a2e5" gracePeriod=2 Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.438134 4675 generic.go:334] "Generic (PLEG): container finished" podID="474f2911-3753-401b-8580-f1027d5b4713" containerID="178ef29ab8e9018f748325c2eaf903c5b00be649254b1e636763ab476ae73211" exitCode=0 Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.438186 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfhw2" event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerDied","Data":"178ef29ab8e9018f748325c2eaf903c5b00be649254b1e636763ab476ae73211"} Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.438219 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tfhw2" event={"ID":"474f2911-3753-401b-8580-f1027d5b4713","Type":"ContainerDied","Data":"86c08002f831acbd078680b3da06ad2d4c74c2318c4b682df4df77c322e35652"} Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.438234 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86c08002f831acbd078680b3da06ad2d4c74c2318c4b682df4df77c322e35652" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.510921 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.521354 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lznfk\" (UniqueName: \"kubernetes.io/projected/474f2911-3753-401b-8580-f1027d5b4713-kube-api-access-lznfk\") pod \"474f2911-3753-401b-8580-f1027d5b4713\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.521545 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-utilities\") pod \"474f2911-3753-401b-8580-f1027d5b4713\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.521625 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-catalog-content\") pod \"474f2911-3753-401b-8580-f1027d5b4713\" (UID: \"474f2911-3753-401b-8580-f1027d5b4713\") " Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.522660 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-utilities" (OuterVolumeSpecName: "utilities") pod "474f2911-3753-401b-8580-f1027d5b4713" (UID: "474f2911-3753-401b-8580-f1027d5b4713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.525904 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.547425 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474f2911-3753-401b-8580-f1027d5b4713-kube-api-access-lznfk" (OuterVolumeSpecName: "kube-api-access-lznfk") pod "474f2911-3753-401b-8580-f1027d5b4713" (UID: "474f2911-3753-401b-8580-f1027d5b4713"). InnerVolumeSpecName "kube-api-access-lznfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.589775 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "474f2911-3753-401b-8580-f1027d5b4713" (UID: "474f2911-3753-401b-8580-f1027d5b4713"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.628828 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lznfk\" (UniqueName: \"kubernetes.io/projected/474f2911-3753-401b-8580-f1027d5b4713-kube-api-access-lznfk\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:39 crc kubenswrapper[4675]: I1121 14:09:39.628867 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/474f2911-3753-401b-8580-f1027d5b4713-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.027631 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a3b5-account-create-d6mhd"] Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.037706 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a3b5-account-create-d6mhd"] Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.451350 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerID="ea21d41fb79a4708b31fa83850a220fcdff20bd95991a425c0e22417a5d0a2e5" exitCode=0 Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.451453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmt5k" event={"ID":"1e9a8b80-1762-4847-a947-9a7d1ab21b9e","Type":"ContainerDied","Data":"ea21d41fb79a4708b31fa83850a220fcdff20bd95991a425c0e22417a5d0a2e5"} Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.451708 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tfhw2" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.513349 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tfhw2"] Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.524683 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tfhw2"] Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.664217 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 14:09:40 crc kubenswrapper[4675]: E1121 14:09:40.736576 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod474f2911_3753_401b_8580_f1027d5b4713.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod474f2911_3753_401b_8580_f1027d5b4713.slice/crio-86c08002f831acbd078680b3da06ad2d4c74c2318c4b682df4df77c322e35652\": RecentStats: unable to find data in memory cache]" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.753303 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-utilities\") pod \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.753382 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-catalog-content\") pod \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.753446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rn44\" (UniqueName: \"kubernetes.io/projected/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-kube-api-access-6rn44\") pod \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\" (UID: \"1e9a8b80-1762-4847-a947-9a7d1ab21b9e\") " Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.754039 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-utilities" (OuterVolumeSpecName: "utilities") pod "1e9a8b80-1762-4847-a947-9a7d1ab21b9e" (UID: "1e9a8b80-1762-4847-a947-9a7d1ab21b9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.759527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-kube-api-access-6rn44" (OuterVolumeSpecName: "kube-api-access-6rn44") pod "1e9a8b80-1762-4847-a947-9a7d1ab21b9e" (UID: "1e9a8b80-1762-4847-a947-9a7d1ab21b9e"). InnerVolumeSpecName "kube-api-access-6rn44". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.852905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e9a8b80-1762-4847-a947-9a7d1ab21b9e" (UID: "1e9a8b80-1762-4847-a947-9a7d1ab21b9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.855839 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.855871 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rn44\" (UniqueName: \"kubernetes.io/projected/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-kube-api-access-6rn44\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.855884 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9a8b80-1762-4847-a947-9a7d1ab21b9e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.864082 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="474f2911-3753-401b-8580-f1027d5b4713" path="/var/lib/kubelet/pods/474f2911-3753-401b-8580-f1027d5b4713/volumes" Nov 21 14:09:40 crc kubenswrapper[4675]: I1121 14:09:40.865366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8bf22f5-333f-43c5-9666-86ffa5657944" path="/var/lib/kubelet/pods/f8bf22f5-333f-43c5-9666-86ffa5657944/volumes" Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.044886 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8e46-account-create-956rv"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.054729 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8vslv"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.065895 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lxtx8"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.078025 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8e46-account-create-956rv"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.088485 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8vslv"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.098327 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lxtx8"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.108608 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f7f0-account-create-9jvch"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.118183 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f7f0-account-create-9jvch"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.465676 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmt5k" event={"ID":"1e9a8b80-1762-4847-a947-9a7d1ab21b9e","Type":"ContainerDied","Data":"23a92a708ac679df9efff5ea2610109822429b10c58423ef2dd777dc8e46ced5"} Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.465752 4675 scope.go:117] "RemoveContainer" containerID="ea21d41fb79a4708b31fa83850a220fcdff20bd95991a425c0e22417a5d0a2e5" Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.466233 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mmt5k" Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.502499 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmt5k"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.512316 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mmt5k"] Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.569101 4675 scope.go:117] "RemoveContainer" containerID="8253c8aa7978876d08c287f55eb1c853ab8851bffc79371f651378f2847b82eb" Nov 21 14:09:41 crc kubenswrapper[4675]: I1121 14:09:41.598807 4675 scope.go:117] "RemoveContainer" containerID="685ce698a5fcaa4eface688a696d0f4a90b0919fbb592ec6d561170f1f521a1a" Nov 21 14:09:42 crc kubenswrapper[4675]: I1121 14:09:42.860845 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028aa8ea-8e30-4ec4-8280-59935c9cf343" path="/var/lib/kubelet/pods/028aa8ea-8e30-4ec4-8280-59935c9cf343/volumes" Nov 21 14:09:42 crc kubenswrapper[4675]: I1121 14:09:42.861794 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" path="/var/lib/kubelet/pods/1e9a8b80-1762-4847-a947-9a7d1ab21b9e/volumes" Nov 21 14:09:42 crc kubenswrapper[4675]: I1121 14:09:42.863547 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f160c6-8942-4e0a-bf07-6c57e7d69175" path="/var/lib/kubelet/pods/21f160c6-8942-4e0a-bf07-6c57e7d69175/volumes" Nov 21 14:09:42 crc kubenswrapper[4675]: I1121 14:09:42.864672 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ece073-0d12-40a6-a6c6-8f40cbc5268f" path="/var/lib/kubelet/pods/91ece073-0d12-40a6-a6c6-8f40cbc5268f/volumes" Nov 21 14:09:42 crc kubenswrapper[4675]: I1121 14:09:42.865258 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da354635-a1e7-4632-90d1-7d0cc2dded63" path="/var/lib/kubelet/pods/da354635-a1e7-4632-90d1-7d0cc2dded63/volumes" Nov 21 14:09:46 crc kubenswrapper[4675]: I1121 14:09:46.136672 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:09:46 crc kubenswrapper[4675]: I1121 14:09:46.137208 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:10:00 crc kubenswrapper[4675]: I1121 14:10:00.676134 4675 generic.go:334] "Generic (PLEG): container finished" podID="a3366490-72da-4662-a609-d3fd320bac49" containerID="59428c2038e32182f353fbd2e613bb055430e6584a409e25081ed36580ad71e6" exitCode=0 Nov 21 14:10:00 crc kubenswrapper[4675]: I1121 14:10:00.676288 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" event={"ID":"a3366490-72da-4662-a609-d3fd320bac49","Type":"ContainerDied","Data":"59428c2038e32182f353fbd2e613bb055430e6584a409e25081ed36580ad71e6"} Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.214399 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.321888 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-inventory\") pod \"a3366490-72da-4662-a609-d3fd320bac49\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.322172 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-ssh-key\") pod \"a3366490-72da-4662-a609-d3fd320bac49\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.322211 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqs8z\" (UniqueName: \"kubernetes.io/projected/a3366490-72da-4662-a609-d3fd320bac49-kube-api-access-mqs8z\") pod \"a3366490-72da-4662-a609-d3fd320bac49\" (UID: \"a3366490-72da-4662-a609-d3fd320bac49\") " Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.326924 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3366490-72da-4662-a609-d3fd320bac49-kube-api-access-mqs8z" (OuterVolumeSpecName: "kube-api-access-mqs8z") pod "a3366490-72da-4662-a609-d3fd320bac49" (UID: "a3366490-72da-4662-a609-d3fd320bac49"). InnerVolumeSpecName "kube-api-access-mqs8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.352898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-inventory" (OuterVolumeSpecName: "inventory") pod "a3366490-72da-4662-a609-d3fd320bac49" (UID: "a3366490-72da-4662-a609-d3fd320bac49"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.353767 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3366490-72da-4662-a609-d3fd320bac49" (UID: "a3366490-72da-4662-a609-d3fd320bac49"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.426385 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.426415 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqs8z\" (UniqueName: \"kubernetes.io/projected/a3366490-72da-4662-a609-d3fd320bac49-kube-api-access-mqs8z\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.426426 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3366490-72da-4662-a609-d3fd320bac49-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.698907 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" event={"ID":"a3366490-72da-4662-a609-d3fd320bac49","Type":"ContainerDied","Data":"420fa4db02f58f9d14ad3f24f9e7385179787fec5be2085ca77baf1b38365adb"} Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.698952 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="420fa4db02f58f9d14ad3f24f9e7385179787fec5be2085ca77baf1b38365adb" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.699346 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.793423 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w"] Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794006 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="extract-utilities" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794029 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="extract-utilities" Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794054 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="registry-server" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794061 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="registry-server" Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794240 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="extract-content" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794253 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="extract-content" Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794281 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="registry-server" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794289 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="registry-server" Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794309 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3366490-72da-4662-a609-d3fd320bac49" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794319 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3366490-72da-4662-a609-d3fd320bac49" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794340 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="extract-utilities" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794348 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="extract-utilities" Nov 21 14:10:02 crc kubenswrapper[4675]: E1121 14:10:02.794389 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="extract-content" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794396 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="extract-content" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794677 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="474f2911-3753-401b-8580-f1027d5b4713" containerName="registry-server" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794701 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9a8b80-1762-4847-a947-9a7d1ab21b9e" containerName="registry-server" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.794720 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3366490-72da-4662-a609-d3fd320bac49" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.795736 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.805119 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.805795 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.809242 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.822168 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.846935 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w"] Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.941152 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.941645 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:02 crc kubenswrapper[4675]: I1121 14:10:02.941691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88cl\" (UniqueName: \"kubernetes.io/projected/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-kube-api-access-b88cl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.044019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.044128 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88cl\" (UniqueName: \"kubernetes.io/projected/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-kube-api-access-b88cl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.044306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.048106 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.055533 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.060973 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88cl\" (UniqueName: \"kubernetes.io/projected/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-kube-api-access-b88cl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tn87w\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.130548 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:10:03 crc kubenswrapper[4675]: I1121 14:10:03.724534 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w"] Nov 21 14:10:04 crc kubenswrapper[4675]: I1121 14:10:04.722380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" event={"ID":"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6","Type":"ContainerStarted","Data":"c2b143945c9ec94347f2b467d48d249bbc42bc1a751fc2adacdd44695633be31"} Nov 21 14:10:04 crc kubenswrapper[4675]: I1121 14:10:04.722843 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" event={"ID":"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6","Type":"ContainerStarted","Data":"6b0f8ffa174b1182ae782a57a4bd4802d3adca53b3de0012e93780b9c80e7584"} Nov 21 14:10:04 crc kubenswrapper[4675]: I1121 14:10:04.740034 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" podStartSLOduration=2.332524801 podStartE2EDuration="2.740015095s" podCreationTimestamp="2025-11-21 14:10:02 +0000 UTC" firstStartedPulling="2025-11-21 14:10:03.713560637 +0000 UTC m=+2280.439975364" lastFinishedPulling="2025-11-21 14:10:04.121050931 +0000 UTC m=+2280.847465658" observedRunningTime="2025-11-21 14:10:04.735708036 +0000 UTC m=+2281.462122763" watchObservedRunningTime="2025-11-21 14:10:04.740015095 +0000 UTC m=+2281.466429822" Nov 21 14:10:16 crc kubenswrapper[4675]: I1121 14:10:16.136595 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:10:16 crc kubenswrapper[4675]: I1121 14:10:16.137110 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:10:18 crc kubenswrapper[4675]: I1121 14:10:18.047016 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rxmhb"] Nov 21 14:10:18 crc kubenswrapper[4675]: I1121 14:10:18.058939 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rxmhb"] Nov 21 14:10:18 crc kubenswrapper[4675]: I1121 14:10:18.861942 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28aa6ba-e1e4-45ef-8c5e-33a3263103ff" path="/var/lib/kubelet/pods/c28aa6ba-e1e4-45ef-8c5e-33a3263103ff/volumes" Nov 21 14:10:24 crc kubenswrapper[4675]: I1121 14:10:24.773128 4675 scope.go:117] "RemoveContainer" containerID="dfa236b5688ff36bc64172535794600694e27dec11a231e8e1d8b5b889e3fa5e" Nov 21 14:10:24 crc kubenswrapper[4675]: I1121 14:10:24.811900 4675 scope.go:117] "RemoveContainer" containerID="c3bd7e7ca3cadae95e7db90ab5002fd269339adac850380576e51c757eb86d69" Nov 21 14:10:24 crc kubenswrapper[4675]: I1121 14:10:24.869139 4675 scope.go:117] "RemoveContainer" containerID="7ec7edb9dbfb5d3fb0593e4de1ec5c2cd5ec76c146fe17fe64c13abe40a3f38f" Nov 21 14:10:24 crc kubenswrapper[4675]: I1121 14:10:24.918122 4675 scope.go:117] "RemoveContainer" containerID="2da31482327ca790c9c56081aac902bdef806cce97741ba884ecf928fce6e019" Nov 21 14:10:24 crc kubenswrapper[4675]: I1121 14:10:24.968309 4675 scope.go:117] "RemoveContainer" containerID="61a97fb024614fff96db2c59d2e2bcfd35aea206500ace4ce0294a52edf17b02" Nov 21 14:10:25 crc kubenswrapper[4675]: I1121 14:10:25.024003 4675 scope.go:117] "RemoveContainer" containerID="2b3ad9a08a4336978237f32addab7202887b67bbfc8821d932ccea44b99bd1f7" Nov 21 14:10:25 crc kubenswrapper[4675]: I1121 14:10:25.077829 4675 scope.go:117] "RemoveContainer" containerID="14473afe41f334940c7118664396a74767fb83e70465bd7322ded67e030f0e98" Nov 21 14:10:31 crc kubenswrapper[4675]: I1121 14:10:31.033017 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-aaf6-account-create-4r6lg"] Nov 21 14:10:31 crc kubenswrapper[4675]: I1121 14:10:31.042784 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-aaf6-account-create-4r6lg"] Nov 21 14:10:32 crc kubenswrapper[4675]: I1121 14:10:32.040170 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-j4w5c"] Nov 21 14:10:32 crc kubenswrapper[4675]: I1121 14:10:32.052683 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-j4w5c"] Nov 21 14:10:32 crc kubenswrapper[4675]: I1121 14:10:32.866638 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb891c7f-db3b-4e11-a6dd-9bad582343a3" path="/var/lib/kubelet/pods/cb891c7f-db3b-4e11-a6dd-9bad582343a3/volumes" Nov 21 14:10:32 crc kubenswrapper[4675]: I1121 14:10:32.868261 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d21d87-934d-4af7-b8ea-f0e58faa3a5f" path="/var/lib/kubelet/pods/d8d21d87-934d-4af7-b8ea-f0e58faa3a5f/volumes" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.042393 4675 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-6s6ct"] Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.053407 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6s6ct"] Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.136889 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.136963 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.137019 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.137957 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.138004 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" gracePeriod=600 Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.225407 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5xlwd"] Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.228587 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.239407 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5xlwd"] Nov 21 14:10:46 crc kubenswrapper[4675]: E1121 14:10:46.274655 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.355527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-utilities\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.355589 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-catalog-content\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.355894 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6xg\" (UniqueName: \"kubernetes.io/projected/01ec152a-f89b-4bc2-b364-48bf4a689479-kube-api-access-cl6xg\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.462880 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-utilities\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.462964 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-catalog-content\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.463110 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl6xg\" (UniqueName: \"kubernetes.io/projected/01ec152a-f89b-4bc2-b364-48bf4a689479-kube-api-access-cl6xg\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.463658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-catalog-content\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc 
kubenswrapper[4675]: I1121 14:10:46.463819 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-utilities\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.495697 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6xg\" (UniqueName: \"kubernetes.io/projected/01ec152a-f89b-4bc2-b364-48bf4a689479-kube-api-access-cl6xg\") pod \"redhat-operators-5xlwd\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.551177 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:10:46 crc kubenswrapper[4675]: I1121 14:10:46.887110 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35624ff8-b298-4e69-a4d6-8dd5e3401b07" path="/var/lib/kubelet/pods/35624ff8-b298-4e69-a4d6-8dd5e3401b07/volumes" Nov 21 14:10:47 crc kubenswrapper[4675]: I1121 14:10:47.081497 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5xlwd"] Nov 21 14:10:47 crc kubenswrapper[4675]: I1121 14:10:47.163994 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerStarted","Data":"26d537c1eaae5dfbb0f2ed00b7f5992a34630ab5d65f2f6171cc34dabc52c785"} Nov 21 14:10:47 crc kubenswrapper[4675]: I1121 14:10:47.166644 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" exitCode=0 Nov 21 14:10:47 crc kubenswrapper[4675]: I1121 14:10:47.166689 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"} Nov 21 14:10:47 crc kubenswrapper[4675]: I1121 14:10:47.166765 4675 scope.go:117] "RemoveContainer" containerID="e71f77ed549004b966e9029533b80c3b91f1ac795adb8367df9590ead7dca39c" Nov 21 14:10:47 crc kubenswrapper[4675]: I1121 14:10:47.168198 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:10:47 crc kubenswrapper[4675]: E1121 14:10:47.168828 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:10:48 crc kubenswrapper[4675]: I1121 14:10:48.178912 4675 generic.go:334] "Generic (PLEG): container finished" podID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerID="1228b26b012ffd3ec3bee68edeef11f68bcff3efc5aa9a1f07167e54b74b015f" exitCode=0 Nov 21 14:10:48 crc kubenswrapper[4675]: I1121 14:10:48.178960 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" 
event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerDied","Data":"1228b26b012ffd3ec3bee68edeef11f68bcff3efc5aa9a1f07167e54b74b015f"} Nov 21 14:10:50 crc kubenswrapper[4675]: I1121 14:10:50.208317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerStarted","Data":"1777e0160dc6031c9789f64acbe89aad10a2498938f56d03e320131a3b6bebc5"} Nov 21 14:10:53 crc kubenswrapper[4675]: I1121 14:10:53.030123 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rl7zh"] Nov 21 14:10:53 crc kubenswrapper[4675]: I1121 14:10:53.041997 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rl7zh"] Nov 21 14:10:54 crc kubenswrapper[4675]: I1121 14:10:54.875833 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6993d84e-5485-4c53-aaa7-9ecce1b9689b" path="/var/lib/kubelet/pods/6993d84e-5485-4c53-aaa7-9ecce1b9689b/volumes" Nov 21 14:11:00 crc kubenswrapper[4675]: I1121 14:11:00.325225 4675 generic.go:334] "Generic (PLEG): container finished" podID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerID="1777e0160dc6031c9789f64acbe89aad10a2498938f56d03e320131a3b6bebc5" exitCode=0 Nov 21 14:11:00 crc kubenswrapper[4675]: I1121 14:11:00.325358 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerDied","Data":"1777e0160dc6031c9789f64acbe89aad10a2498938f56d03e320131a3b6bebc5"} Nov 21 14:11:00 crc kubenswrapper[4675]: I1121 14:11:00.850618 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:11:00 crc kubenswrapper[4675]: E1121 14:11:00.851535 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:11:03 crc kubenswrapper[4675]: I1121 14:11:03.363400 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerStarted","Data":"a81f4ae4de9e75771607f34fb12f2dad74f6548f5b6fa8d081f0e007fea5cca1"} Nov 21 14:11:03 crc kubenswrapper[4675]: I1121 14:11:03.384005 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5xlwd" podStartSLOduration=2.907894731 podStartE2EDuration="17.383988138s" podCreationTimestamp="2025-11-21 14:10:46 +0000 UTC" firstStartedPulling="2025-11-21 14:10:48.181304062 +0000 UTC m=+2324.907718789" lastFinishedPulling="2025-11-21 14:11:02.657397469 +0000 UTC m=+2339.383812196" observedRunningTime="2025-11-21 14:11:03.38048034 +0000 UTC m=+2340.106895067" watchObservedRunningTime="2025-11-21 14:11:03.383988138 +0000 UTC m=+2340.110402865" Nov 21 14:11:06 crc kubenswrapper[4675]: I1121 14:11:06.551583 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:11:06 crc kubenswrapper[4675]: I1121 14:11:06.552166 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:11:07 crc kubenswrapper[4675]: I1121 14:11:07.602058 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5xlwd" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="registry-server" probeResult="failure" output=< Nov 21 14:11:07 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:11:07 crc kubenswrapper[4675]: > Nov 21 14:11:14 crc kubenswrapper[4675]: I1121 14:11:14.857378 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:11:14 crc kubenswrapper[4675]: E1121 14:11:14.858391 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:11:16 crc kubenswrapper[4675]: I1121 14:11:16.601254 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:11:16 crc kubenswrapper[4675]: I1121 14:11:16.656587 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:11:16 crc kubenswrapper[4675]: I1121 14:11:16.847662 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5xlwd"] Nov 21 14:11:18 crc kubenswrapper[4675]: I1121 14:11:18.541664 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5xlwd" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="registry-server" containerID="cri-o://a81f4ae4de9e75771607f34fb12f2dad74f6548f5b6fa8d081f0e007fea5cca1" gracePeriod=2 Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.580999 4675 generic.go:334] "Generic (PLEG): container finished" podID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerID="a81f4ae4de9e75771607f34fb12f2dad74f6548f5b6fa8d081f0e007fea5cca1" exitCode=0 Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.581099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerDied","Data":"a81f4ae4de9e75771607f34fb12f2dad74f6548f5b6fa8d081f0e007fea5cca1"} Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.581381 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xlwd" event={"ID":"01ec152a-f89b-4bc2-b364-48bf4a689479","Type":"ContainerDied","Data":"26d537c1eaae5dfbb0f2ed00b7f5992a34630ab5d65f2f6171cc34dabc52c785"} Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.581402 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d537c1eaae5dfbb0f2ed00b7f5992a34630ab5d65f2f6171cc34dabc52c785" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.610031 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.773746 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-catalog-content\") pod \"01ec152a-f89b-4bc2-b364-48bf4a689479\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.773901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl6xg\" (UniqueName: \"kubernetes.io/projected/01ec152a-f89b-4bc2-b364-48bf4a689479-kube-api-access-cl6xg\") pod \"01ec152a-f89b-4bc2-b364-48bf4a689479\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.774053 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-utilities\") pod \"01ec152a-f89b-4bc2-b364-48bf4a689479\" (UID: \"01ec152a-f89b-4bc2-b364-48bf4a689479\") " Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.775146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-utilities" (OuterVolumeSpecName: "utilities") pod "01ec152a-f89b-4bc2-b364-48bf4a689479" (UID: "01ec152a-f89b-4bc2-b364-48bf4a689479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.780755 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ec152a-f89b-4bc2-b364-48bf4a689479-kube-api-access-cl6xg" (OuterVolumeSpecName: "kube-api-access-cl6xg") pod "01ec152a-f89b-4bc2-b364-48bf4a689479" (UID: "01ec152a-f89b-4bc2-b364-48bf4a689479"). InnerVolumeSpecName "kube-api-access-cl6xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.873981 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01ec152a-f89b-4bc2-b364-48bf4a689479" (UID: "01ec152a-f89b-4bc2-b364-48bf4a689479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.878137 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.878167 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl6xg\" (UniqueName: \"kubernetes.io/projected/01ec152a-f89b-4bc2-b364-48bf4a689479-kube-api-access-cl6xg\") on node \"crc\" DevicePath \"\"" Nov 21 14:11:19 crc kubenswrapper[4675]: I1121 14:11:19.878179 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01ec152a-f89b-4bc2-b364-48bf4a689479-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:11:20 crc kubenswrapper[4675]: I1121 14:11:20.592818 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5xlwd" Nov 21 14:11:20 crc kubenswrapper[4675]: I1121 14:11:20.645187 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5xlwd"] Nov 21 14:11:20 crc kubenswrapper[4675]: I1121 14:11:20.655495 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5xlwd"] Nov 21 14:11:20 crc kubenswrapper[4675]: I1121 14:11:20.865217 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" path="/var/lib/kubelet/pods/01ec152a-f89b-4bc2-b364-48bf4a689479/volumes" Nov 21 14:11:23 crc kubenswrapper[4675]: I1121 14:11:23.633250 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" containerID="c2b143945c9ec94347f2b467d48d249bbc42bc1a751fc2adacdd44695633be31" exitCode=0 Nov 21 14:11:23 crc kubenswrapper[4675]: I1121 14:11:23.633337 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" event={"ID":"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6","Type":"ContainerDied","Data":"c2b143945c9ec94347f2b467d48d249bbc42bc1a751fc2adacdd44695633be31"} Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.120944 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.277568 4675 scope.go:117] "RemoveContainer" containerID="e7ab965dc6e4cc7eed69ca6e5d6ccd026fb5d349195679044f0fe3ddceef504c" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.314234 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-ssh-key\") pod \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.314389 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b88cl\" (UniqueName: \"kubernetes.io/projected/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-kube-api-access-b88cl\") pod \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.314477 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-inventory\") pod \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\" (UID: \"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6\") " Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.315994 4675 scope.go:117] "RemoveContainer" containerID="c441edb88f548b85052c4e3f5fd231272728ce23736d5fe0ce386c13e2538806" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.320866 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-kube-api-access-b88cl" (OuterVolumeSpecName: "kube-api-access-b88cl") pod "7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" (UID: "7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6"). InnerVolumeSpecName "kube-api-access-b88cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.349985 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" (UID: "7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.358421 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-inventory" (OuterVolumeSpecName: "inventory") pod "7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" (UID: "7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.418461 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.418497 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b88cl\" (UniqueName: \"kubernetes.io/projected/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-kube-api-access-b88cl\") on node \"crc\" DevicePath \"\"" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.418510 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.470078 4675 scope.go:117] "RemoveContainer" containerID="bed4fe05f9df976bb44cc9146c6258506584d744d119239593a94078c90d369c" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.492590 4675 scope.go:117] "RemoveContainer" containerID="709a48b9984132ff6ea4d89e11100eac523f02c55b9c3fb72551a987728dba9f" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.654479 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" event={"ID":"7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6","Type":"ContainerDied","Data":"6b0f8ffa174b1182ae782a57a4bd4802d3adca53b3de0012e93780b9c80e7584"} Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.654793 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b0f8ffa174b1182ae782a57a4bd4802d3adca53b3de0012e93780b9c80e7584" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.654563 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tn87w" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.739372 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"] Nov 21 14:11:25 crc kubenswrapper[4675]: E1121 14:11:25.739996 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="extract-utilities" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.740019 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="extract-utilities" Nov 21 14:11:25 crc kubenswrapper[4675]: E1121 14:11:25.740040 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="registry-server" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.740048 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="registry-server" Nov 21 14:11:25 crc kubenswrapper[4675]: E1121 14:11:25.740140 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.740151 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:11:25 crc kubenswrapper[4675]: E1121 14:11:25.740169 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="extract-content" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.740176 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="extract-content" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.740426 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ec152a-f89b-4bc2-b364-48bf4a689479" containerName="registry-server" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.740463 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.741448 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.743297 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.743517 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.743716 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.744082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.765354 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"] Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.848827 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:11:25 crc kubenswrapper[4675]: E1121 14:11:25.849534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.929321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.929423 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzl6z\" (UniqueName: \"kubernetes.io/projected/045679ad-48d6-48ed-a9a5-8699cc283733-kube-api-access-gzl6z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" Nov 21 14:11:25 crc kubenswrapper[4675]: I1121 14:11:25.930479 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.033057 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" Nov 21 14:11:26 crc kubenswrapper[4675]: 
I1121 14:11:26.033322 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.033451 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzl6z\" (UniqueName: \"kubernetes.io/projected/045679ad-48d6-48ed-a9a5-8699cc283733-kube-api-access-gzl6z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.038736 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.046854 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.058885 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzl6z\" (UniqueName: \"kubernetes.io/projected/045679ad-48d6-48ed-a9a5-8699cc283733-kube-api-access-gzl6z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-t67kg\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.060822 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.593335 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"]
Nov 21 14:11:26 crc kubenswrapper[4675]: I1121 14:11:26.666442 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" event={"ID":"045679ad-48d6-48ed-a9a5-8699cc283733","Type":"ContainerStarted","Data":"6a339da1ad0bae82c02b8bb52cbe9b26e68e3b951d145a2c33eba703d51818e0"}
Nov 21 14:11:27 crc kubenswrapper[4675]: I1121 14:11:27.676813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" event={"ID":"045679ad-48d6-48ed-a9a5-8699cc283733","Type":"ContainerStarted","Data":"59eb6fa07eabfd55ea9e38bacb761f5d84129f538d79e1c425b292f068421add"}
Nov 21 14:11:27 crc kubenswrapper[4675]: I1121 14:11:27.702556 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" podStartSLOduration=2.283018444 podStartE2EDuration="2.702535761s" podCreationTimestamp="2025-11-21 14:11:25 +0000 UTC" firstStartedPulling="2025-11-21 14:11:26.605703637 +0000 UTC m=+2363.332118364" lastFinishedPulling="2025-11-21 14:11:27.025220954 +0000 UTC m=+2363.751635681" observedRunningTime="2025-11-21 14:11:27.695918844 +0000 UTC m=+2364.422333581" watchObservedRunningTime="2025-11-21 14:11:27.702535761 +0000 UTC m=+2364.428950488"
Nov 21 14:11:29 crc kubenswrapper[4675]: I1121 14:11:29.041792 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxrvb"]
Nov 21 14:11:29 crc kubenswrapper[4675]: I1121 14:11:29.051840 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxrvb"]
Nov 21 14:11:30 crc kubenswrapper[4675]: I1121 14:11:30.861370 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018c03cb-3abb-4ad7-b496-fb3d440d3ec3" path="/var/lib/kubelet/pods/018c03cb-3abb-4ad7-b496-fb3d440d3ec3/volumes"
Nov 21 14:11:31 crc kubenswrapper[4675]: I1121 14:11:31.718544 4675 generic.go:334] "Generic (PLEG): container finished" podID="045679ad-48d6-48ed-a9a5-8699cc283733" containerID="59eb6fa07eabfd55ea9e38bacb761f5d84129f538d79e1c425b292f068421add" exitCode=0
Nov 21 14:11:31 crc kubenswrapper[4675]: I1121 14:11:31.718596 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" event={"ID":"045679ad-48d6-48ed-a9a5-8699cc283733","Type":"ContainerDied","Data":"59eb6fa07eabfd55ea9e38bacb761f5d84129f538d79e1c425b292f068421add"}
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.194480 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.305454 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-ssh-key\") pod \"045679ad-48d6-48ed-a9a5-8699cc283733\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") "
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.305509 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzl6z\" (UniqueName: \"kubernetes.io/projected/045679ad-48d6-48ed-a9a5-8699cc283733-kube-api-access-gzl6z\") pod \"045679ad-48d6-48ed-a9a5-8699cc283733\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") "
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.305680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-inventory\") pod \"045679ad-48d6-48ed-a9a5-8699cc283733\" (UID: \"045679ad-48d6-48ed-a9a5-8699cc283733\") "
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.311152 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/045679ad-48d6-48ed-a9a5-8699cc283733-kube-api-access-gzl6z" (OuterVolumeSpecName: "kube-api-access-gzl6z") pod "045679ad-48d6-48ed-a9a5-8699cc283733" (UID: "045679ad-48d6-48ed-a9a5-8699cc283733"). InnerVolumeSpecName "kube-api-access-gzl6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.335853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-inventory" (OuterVolumeSpecName: "inventory") pod "045679ad-48d6-48ed-a9a5-8699cc283733" (UID: "045679ad-48d6-48ed-a9a5-8699cc283733"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.338087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "045679ad-48d6-48ed-a9a5-8699cc283733" (UID: "045679ad-48d6-48ed-a9a5-8699cc283733"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.408300 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.408332 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzl6z\" (UniqueName: \"kubernetes.io/projected/045679ad-48d6-48ed-a9a5-8699cc283733-kube-api-access-gzl6z\") on node \"crc\" DevicePath \"\""
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.408349 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045679ad-48d6-48ed-a9a5-8699cc283733-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.740637 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg" event={"ID":"045679ad-48d6-48ed-a9a5-8699cc283733","Type":"ContainerDied","Data":"6a339da1ad0bae82c02b8bb52cbe9b26e68e3b951d145a2c33eba703d51818e0"}
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.740709 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a339da1ad0bae82c02b8bb52cbe9b26e68e3b951d145a2c33eba703d51818e0"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.740730 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-t67kg"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.809969 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"]
Nov 21 14:11:33 crc kubenswrapper[4675]: E1121 14:11:33.810578 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045679ad-48d6-48ed-a9a5-8699cc283733" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.810619 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="045679ad-48d6-48ed-a9a5-8699cc283733" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.810931 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="045679ad-48d6-48ed-a9a5-8699cc283733" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.812746 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.816734 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.816734 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.816749 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.816794 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.819816 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"]
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.921744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.921916 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p67dk\" (UniqueName: \"kubernetes.io/projected/ec8162e7-cc12-48eb-982d-036b866eaeb0-kube-api-access-p67dk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:33 crc kubenswrapper[4675]: I1121 14:11:33.922180 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.024255 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p67dk\" (UniqueName: \"kubernetes.io/projected/ec8162e7-cc12-48eb-982d-036b866eaeb0-kube-api-access-p67dk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.024372 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.024493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.028353 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.028934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.041323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67dk\" (UniqueName: \"kubernetes.io/projected/ec8162e7-cc12-48eb-982d-036b866eaeb0-kube-api-access-p67dk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8tsm2\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.136691 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.737113 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"]
Nov 21 14:11:34 crc kubenswrapper[4675]: I1121 14:11:34.755229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2" event={"ID":"ec8162e7-cc12-48eb-982d-036b866eaeb0","Type":"ContainerStarted","Data":"51548f53cecfe10ae422220a33bf58c727c5762364698ec5c897daf875bcb7e4"}
Nov 21 14:11:35 crc kubenswrapper[4675]: I1121 14:11:35.769058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2" event={"ID":"ec8162e7-cc12-48eb-982d-036b866eaeb0","Type":"ContainerStarted","Data":"25d3a0eaa40bcb0ddd09ca2d470aeafcb42c00c4e76cf517dc2a9f91461f2b85"}
Nov 21 14:11:35 crc kubenswrapper[4675]: I1121 14:11:35.794551 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2" podStartSLOduration=2.337899428 podStartE2EDuration="2.794529171s" podCreationTimestamp="2025-11-21 14:11:33 +0000 UTC" firstStartedPulling="2025-11-21 14:11:34.738339781 +0000 UTC m=+2371.464754508" lastFinishedPulling="2025-11-21 14:11:35.194969514 +0000 UTC m=+2371.921384251" observedRunningTime="2025-11-21 14:11:35.782136429 +0000 UTC m=+2372.508551156" watchObservedRunningTime="2025-11-21 14:11:35.794529171 +0000 UTC m=+2372.520943908"
Nov 21 14:11:37 crc kubenswrapper[4675]: I1121 14:11:37.849177 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:11:37 crc kubenswrapper[4675]: E1121 14:11:37.851027 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:11:51 crc kubenswrapper[4675]: I1121 14:11:51.850291 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:11:51 crc kubenswrapper[4675]: E1121 14:11:51.852582 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:12:04 crc kubenswrapper[4675]: I1121 14:12:04.873104 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:12:04 crc kubenswrapper[4675]: E1121 14:12:04.874403 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:12:11 crc kubenswrapper[4675]: I1121 14:12:11.210290 4675 generic.go:334] "Generic (PLEG): container finished" podID="ec8162e7-cc12-48eb-982d-036b866eaeb0" containerID="25d3a0eaa40bcb0ddd09ca2d470aeafcb42c00c4e76cf517dc2a9f91461f2b85" exitCode=0
Nov 21 14:12:11 crc kubenswrapper[4675]: I1121 14:12:11.210393 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2" event={"ID":"ec8162e7-cc12-48eb-982d-036b866eaeb0","Type":"ContainerDied","Data":"25d3a0eaa40bcb0ddd09ca2d470aeafcb42c00c4e76cf517dc2a9f91461f2b85"}
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.680601 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.780637 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-ssh-key\") pod \"ec8162e7-cc12-48eb-982d-036b866eaeb0\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") "
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.780685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-inventory\") pod \"ec8162e7-cc12-48eb-982d-036b866eaeb0\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") "
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.780901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p67dk\" (UniqueName: \"kubernetes.io/projected/ec8162e7-cc12-48eb-982d-036b866eaeb0-kube-api-access-p67dk\") pod \"ec8162e7-cc12-48eb-982d-036b866eaeb0\" (UID: \"ec8162e7-cc12-48eb-982d-036b866eaeb0\") "
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.786762 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8162e7-cc12-48eb-982d-036b866eaeb0-kube-api-access-p67dk" (OuterVolumeSpecName: "kube-api-access-p67dk") pod "ec8162e7-cc12-48eb-982d-036b866eaeb0" (UID: "ec8162e7-cc12-48eb-982d-036b866eaeb0"). InnerVolumeSpecName "kube-api-access-p67dk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.813104 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ec8162e7-cc12-48eb-982d-036b866eaeb0" (UID: "ec8162e7-cc12-48eb-982d-036b866eaeb0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.813530 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-inventory" (OuterVolumeSpecName: "inventory") pod "ec8162e7-cc12-48eb-982d-036b866eaeb0" (UID: "ec8162e7-cc12-48eb-982d-036b866eaeb0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.885557 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p67dk\" (UniqueName: \"kubernetes.io/projected/ec8162e7-cc12-48eb-982d-036b866eaeb0-kube-api-access-p67dk\") on node \"crc\" DevicePath \"\""
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.885603 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:12:12 crc kubenswrapper[4675]: I1121 14:12:12.885619 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec8162e7-cc12-48eb-982d-036b866eaeb0-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.234526 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2" event={"ID":"ec8162e7-cc12-48eb-982d-036b866eaeb0","Type":"ContainerDied","Data":"51548f53cecfe10ae422220a33bf58c727c5762364698ec5c897daf875bcb7e4"}
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.234571 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51548f53cecfe10ae422220a33bf58c727c5762364698ec5c897daf875bcb7e4"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.234602 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8tsm2"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.336105 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"]
Nov 21 14:12:13 crc kubenswrapper[4675]: E1121 14:12:13.336897 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8162e7-cc12-48eb-982d-036b866eaeb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.336915 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8162e7-cc12-48eb-982d-036b866eaeb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.337144 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8162e7-cc12-48eb-982d-036b866eaeb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.337877 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.339620 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.339959 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.342793 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.345715 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.361747 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"]
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.403399 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzm48\" (UniqueName: \"kubernetes.io/projected/e542a2fc-0fd2-49fa-873e-1d580edd93d4-kube-api-access-qzm48\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.403686 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.403999 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.505748 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.505927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.506129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzm48\" (UniqueName: \"kubernetes.io/projected/e542a2fc-0fd2-49fa-873e-1d580edd93d4-kube-api-access-qzm48\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.510298 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.510463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.525675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzm48\" (UniqueName: \"kubernetes.io/projected/e542a2fc-0fd2-49fa-873e-1d580edd93d4-kube-api-access-qzm48\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:13 crc kubenswrapper[4675]: I1121 14:12:13.706977 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:12:14 crc kubenswrapper[4675]: I1121 14:12:14.237964 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"]
Nov 21 14:12:14 crc kubenswrapper[4675]: I1121 14:12:14.245813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm" event={"ID":"e542a2fc-0fd2-49fa-873e-1d580edd93d4","Type":"ContainerStarted","Data":"8ee92c27eabe2cd0e75bd30f3cb62fcabd627b3d502b4d539268b493d0d9d0be"}
Nov 21 14:12:15 crc kubenswrapper[4675]: I1121 14:12:15.258326 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm" event={"ID":"e542a2fc-0fd2-49fa-873e-1d580edd93d4","Type":"ContainerStarted","Data":"414486fbb301a330550e1bd17870331d09719fc4614481c16592a351250d20c5"}
Nov 21 14:12:15 crc kubenswrapper[4675]: I1121 14:12:15.286621 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm" podStartSLOduration=1.894726966 podStartE2EDuration="2.286605016s" podCreationTimestamp="2025-11-21 14:12:13 +0000 UTC" firstStartedPulling="2025-11-21 14:12:14.234019367 +0000 UTC m=+2410.960434094" lastFinishedPulling="2025-11-21 14:12:14.625897417 +0000 UTC m=+2411.352312144" observedRunningTime="2025-11-21 14:12:15.276910311 +0000 UTC m=+2412.003325058" watchObservedRunningTime="2025-11-21 14:12:15.286605016 +0000 UTC m=+2412.013019743"
Nov 21 14:12:16 crc kubenswrapper[4675]: I1121 14:12:16.848903 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:12:16 crc kubenswrapper[4675]: E1121 14:12:16.849468 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:12:25 crc kubenswrapper[4675]: I1121 14:12:25.606527 4675 scope.go:117] "RemoveContainer" containerID="74da10aae6a67f6c1367017d6d2f1b44d9464b443b948804115e87fcfd349340"
Nov 21 14:12:30 crc kubenswrapper[4675]: I1121 14:12:30.849406 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:12:30 crc kubenswrapper[4675]: E1121 14:12:30.850319 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:12:42 crc kubenswrapper[4675]: I1121 14:12:42.849575 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:12:42 crc kubenswrapper[4675]: E1121 14:12:42.850449 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:12:55 crc kubenswrapper[4675]: I1121 14:12:55.848836 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:12:55 crc kubenswrapper[4675]: E1121 14:12:55.849712 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:13:03 crc kubenswrapper[4675]: I1121 14:13:03.803725 4675 generic.go:334] "Generic (PLEG): container finished" podID="e542a2fc-0fd2-49fa-873e-1d580edd93d4" containerID="414486fbb301a330550e1bd17870331d09719fc4614481c16592a351250d20c5" exitCode=0
Nov 21 14:13:03 crc kubenswrapper[4675]: I1121 14:13:03.803812 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm" event={"ID":"e542a2fc-0fd2-49fa-873e-1d580edd93d4","Type":"ContainerDied","Data":"414486fbb301a330550e1bd17870331d09719fc4614481c16592a351250d20c5"}
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.269684 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.380787 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzm48\" (UniqueName: \"kubernetes.io/projected/e542a2fc-0fd2-49fa-873e-1d580edd93d4-kube-api-access-qzm48\") pod \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") "
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.380983 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-ssh-key\") pod \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") "
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.381006 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-inventory\") pod \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\" (UID: \"e542a2fc-0fd2-49fa-873e-1d580edd93d4\") "
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.386663 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e542a2fc-0fd2-49fa-873e-1d580edd93d4-kube-api-access-qzm48" (OuterVolumeSpecName: "kube-api-access-qzm48") pod "e542a2fc-0fd2-49fa-873e-1d580edd93d4" (UID: "e542a2fc-0fd2-49fa-873e-1d580edd93d4"). InnerVolumeSpecName "kube-api-access-qzm48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.416902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-inventory" (OuterVolumeSpecName: "inventory") pod "e542a2fc-0fd2-49fa-873e-1d580edd93d4" (UID: "e542a2fc-0fd2-49fa-873e-1d580edd93d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.422127 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e542a2fc-0fd2-49fa-873e-1d580edd93d4" (UID: "e542a2fc-0fd2-49fa-873e-1d580edd93d4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.484284 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzm48\" (UniqueName: \"kubernetes.io/projected/e542a2fc-0fd2-49fa-873e-1d580edd93d4-kube-api-access-qzm48\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.484332 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.484347 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e542a2fc-0fd2-49fa-873e-1d580edd93d4-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.828638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm" event={"ID":"e542a2fc-0fd2-49fa-873e-1d580edd93d4","Type":"ContainerDied","Data":"8ee92c27eabe2cd0e75bd30f3cb62fcabd627b3d502b4d539268b493d0d9d0be"}
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.828979 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee92c27eabe2cd0e75bd30f3cb62fcabd627b3d502b4d539268b493d0d9d0be"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.828712 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.919525 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dpdgb"]
Nov 21 14:13:05 crc kubenswrapper[4675]: E1121 14:13:05.920142 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e542a2fc-0fd2-49fa-873e-1d580edd93d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.920164 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e542a2fc-0fd2-49fa-873e-1d580edd93d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.920473 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e542a2fc-0fd2-49fa-873e-1d580edd93d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.921474 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.926347 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.926393 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.926526 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.930580 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.934950 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dpdgb"]
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.997857 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w6h\" (UniqueName: \"kubernetes.io/projected/7c34957b-d4df-4448-9396-9e7244dc85b5-kube-api-access-s5w6h\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.997992 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:05 crc kubenswrapper[4675]: I1121 14:13:05.998046 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.100577 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5w6h\" (UniqueName: \"kubernetes.io/projected/7c34957b-d4df-4448-9396-9e7244dc85b5-kube-api-access-s5w6h\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.100733 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.100815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.105734 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.110888 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.120632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5w6h\" (UniqueName: \"kubernetes.io/projected/7c34957b-d4df-4448-9396-9e7244dc85b5-kube-api-access-s5w6h\") pod \"ssh-known-hosts-edpm-deployment-dpdgb\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") " pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.243000 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.797607 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.804442 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dpdgb"]
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.846493 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb" event={"ID":"7c34957b-d4df-4448-9396-9e7244dc85b5","Type":"ContainerStarted","Data":"9633ae300e0a4ae9286c4afc349bc744332e403cfcb22ea5c51ce99223b226ea"}
Nov 21 14:13:06 crc kubenswrapper[4675]: I1121 14:13:06.849052 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:13:06 crc kubenswrapper[4675]: E1121 14:13:06.849467 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:13:07 crc kubenswrapper[4675]: I1121 14:13:07.858814 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb" event={"ID":"7c34957b-d4df-4448-9396-9e7244dc85b5","Type":"ContainerStarted","Data":"8a6d0334aab4f7d7455b3a7bb4c889ac3fdb30764312dd91bafdc6f33fd36fb7"}
Nov 21 14:13:07 crc kubenswrapper[4675]: I1121 14:13:07.885510 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb" podStartSLOduration=2.452495819 podStartE2EDuration="2.885490816s" podCreationTimestamp="2025-11-21 14:13:05 +0000 UTC" firstStartedPulling="2025-11-21 14:13:06.797394682 +0000 UTC m=+2463.523809399" lastFinishedPulling="2025-11-21 14:13:07.230389659 +0000 UTC m=+2463.956804396" observedRunningTime="2025-11-21 14:13:07.877134606 +0000 UTC m=+2464.603549343" watchObservedRunningTime="2025-11-21 14:13:07.885490816 +0000 UTC m=+2464.611905553"
Nov 21 14:13:14 crc kubenswrapper[4675]: I1121 14:13:14.952711 4675 generic.go:334] "Generic (PLEG): container finished" podID="7c34957b-d4df-4448-9396-9e7244dc85b5" containerID="8a6d0334aab4f7d7455b3a7bb4c889ac3fdb30764312dd91bafdc6f33fd36fb7" exitCode=0
Nov 21 14:13:14 crc kubenswrapper[4675]: I1121 14:13:14.952799 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb" event={"ID":"7c34957b-d4df-4448-9396-9e7244dc85b5","Type":"ContainerDied","Data":"8a6d0334aab4f7d7455b3a7bb4c889ac3fdb30764312dd91bafdc6f33fd36fb7"}
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.438948 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.545795 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5w6h\" (UniqueName: \"kubernetes.io/projected/7c34957b-d4df-4448-9396-9e7244dc85b5-kube-api-access-s5w6h\") pod \"7c34957b-d4df-4448-9396-9e7244dc85b5\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") "
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.545849 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-inventory-0\") pod \"7c34957b-d4df-4448-9396-9e7244dc85b5\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") "
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.545957 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-ssh-key-openstack-edpm-ipam\") pod \"7c34957b-d4df-4448-9396-9e7244dc85b5\" (UID: \"7c34957b-d4df-4448-9396-9e7244dc85b5\") "
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.555274 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c34957b-d4df-4448-9396-9e7244dc85b5-kube-api-access-s5w6h" (OuterVolumeSpecName: "kube-api-access-s5w6h") pod "7c34957b-d4df-4448-9396-9e7244dc85b5" (UID: "7c34957b-d4df-4448-9396-9e7244dc85b5"). InnerVolumeSpecName "kube-api-access-s5w6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.576516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c34957b-d4df-4448-9396-9e7244dc85b5" (UID: "7c34957b-d4df-4448-9396-9e7244dc85b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.586263 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7c34957b-d4df-4448-9396-9e7244dc85b5" (UID: "7c34957b-d4df-4448-9396-9e7244dc85b5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.648887 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5w6h\" (UniqueName: \"kubernetes.io/projected/7c34957b-d4df-4448-9396-9e7244dc85b5-kube-api-access-s5w6h\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.648923 4675 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-inventory-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.648937 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c34957b-d4df-4448-9396-9e7244dc85b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.978523 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb" event={"ID":"7c34957b-d4df-4448-9396-9e7244dc85b5","Type":"ContainerDied","Data":"9633ae300e0a4ae9286c4afc349bc744332e403cfcb22ea5c51ce99223b226ea"}
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.978570 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9633ae300e0a4ae9286c4afc349bc744332e403cfcb22ea5c51ce99223b226ea"
Nov 21 14:13:16 crc kubenswrapper[4675]: I1121 14:13:16.978568 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dpdgb"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.059515 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"]
Nov 21 14:13:17 crc kubenswrapper[4675]: E1121 14:13:17.060553 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c34957b-d4df-4448-9396-9e7244dc85b5" containerName="ssh-known-hosts-edpm-deployment"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.060578 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c34957b-d4df-4448-9396-9e7244dc85b5" containerName="ssh-known-hosts-edpm-deployment"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.060891 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c34957b-d4df-4448-9396-9e7244dc85b5" containerName="ssh-known-hosts-edpm-deployment"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.061793 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.064567 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.064806 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.068560 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.069561 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.081984 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"]
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.159288 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.159340 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.159782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2r78\" (UniqueName: \"kubernetes.io/projected/d47864c9-0269-47a5-b718-bce3541df7c5-kube-api-access-d2r78\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.262451 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.262511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.262643 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2r78\" (UniqueName: \"kubernetes.io/projected/d47864c9-0269-47a5-b718-bce3541df7c5-kube-api-access-d2r78\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.266157 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.277942 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2r78\" (UniqueName: \"kubernetes.io/projected/d47864c9-0269-47a5-b718-bce3541df7c5-kube-api-access-d2r78\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.303565 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rblk7\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.384340 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.957622 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"]
Nov 21 14:13:17 crc kubenswrapper[4675]: I1121 14:13:17.989243 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7" event={"ID":"d47864c9-0269-47a5-b718-bce3541df7c5","Type":"ContainerStarted","Data":"db9e6ccffce8515d692dc43e6a743cfa6d72297ea9af8bef03fb709a35f68a84"}
Nov 21 14:13:18 crc kubenswrapper[4675]: I1121 14:13:18.850221 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:13:18 crc kubenswrapper[4675]: E1121 14:13:18.851630 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:13:19 crc kubenswrapper[4675]: I1121 14:13:19.012499 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7" event={"ID":"d47864c9-0269-47a5-b718-bce3541df7c5","Type":"ContainerStarted","Data":"d13444ab6a953ab346783c71dc784208f488c8e448338450c053e77acdcebfa9"}
Nov 21 14:13:19 crc kubenswrapper[4675]: I1121 14:13:19.034679 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7" podStartSLOduration=1.5240823529999998 podStartE2EDuration="2.034656256s" podCreationTimestamp="2025-11-21 14:13:17 +0000 UTC" firstStartedPulling="2025-11-21 14:13:17.961532299 +0000 UTC m=+2474.687947026" lastFinishedPulling="2025-11-21 14:13:18.472106202 +0000 UTC m=+2475.198520929" observedRunningTime="2025-11-21 14:13:19.028524581 +0000 UTC m=+2475.754939308" watchObservedRunningTime="2025-11-21 14:13:19.034656256 +0000 UTC m=+2475.761070983"
Nov 21 14:13:27 crc kubenswrapper[4675]: I1121 14:13:27.086801 4675 generic.go:334] "Generic (PLEG): container finished" podID="d47864c9-0269-47a5-b718-bce3541df7c5" containerID="d13444ab6a953ab346783c71dc784208f488c8e448338450c053e77acdcebfa9" exitCode=0
Nov 21 14:13:27 crc kubenswrapper[4675]: I1121 14:13:27.086896 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7" event={"ID":"d47864c9-0269-47a5-b718-bce3541df7c5","Type":"ContainerDied","Data":"d13444ab6a953ab346783c71dc784208f488c8e448338450c053e77acdcebfa9"}
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.531045 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.565946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-ssh-key\") pod \"d47864c9-0269-47a5-b718-bce3541df7c5\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") "
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.566027 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-inventory\") pod \"d47864c9-0269-47a5-b718-bce3541df7c5\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") "
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.566336 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2r78\" (UniqueName: \"kubernetes.io/projected/d47864c9-0269-47a5-b718-bce3541df7c5-kube-api-access-d2r78\") pod \"d47864c9-0269-47a5-b718-bce3541df7c5\" (UID: \"d47864c9-0269-47a5-b718-bce3541df7c5\") "
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.576419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47864c9-0269-47a5-b718-bce3541df7c5-kube-api-access-d2r78" (OuterVolumeSpecName: "kube-api-access-d2r78") pod "d47864c9-0269-47a5-b718-bce3541df7c5" (UID: "d47864c9-0269-47a5-b718-bce3541df7c5"). InnerVolumeSpecName "kube-api-access-d2r78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.607475 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d47864c9-0269-47a5-b718-bce3541df7c5" (UID: "d47864c9-0269-47a5-b718-bce3541df7c5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.607858 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-inventory" (OuterVolumeSpecName: "inventory") pod "d47864c9-0269-47a5-b718-bce3541df7c5" (UID: "d47864c9-0269-47a5-b718-bce3541df7c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.669683 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.669730 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d47864c9-0269-47a5-b718-bce3541df7c5-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:28 crc kubenswrapper[4675]: I1121 14:13:28.669744 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2r78\" (UniqueName: \"kubernetes.io/projected/d47864c9-0269-47a5-b718-bce3541df7c5-kube-api-access-d2r78\") on node \"crc\" DevicePath \"\""
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.116512 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7" event={"ID":"d47864c9-0269-47a5-b718-bce3541df7c5","Type":"ContainerDied","Data":"db9e6ccffce8515d692dc43e6a743cfa6d72297ea9af8bef03fb709a35f68a84"}
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.116558 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9e6ccffce8515d692dc43e6a743cfa6d72297ea9af8bef03fb709a35f68a84"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.116574 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rblk7"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.179591 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"]
Nov 21 14:13:29 crc kubenswrapper[4675]: E1121 14:13:29.180185 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47864c9-0269-47a5-b718-bce3541df7c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.180206 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47864c9-0269-47a5-b718-bce3541df7c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.180481 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d47864c9-0269-47a5-b718-bce3541df7c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.181360 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.184180 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.184355 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.184554 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.186697 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.190927 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"]
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.283678 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.283755 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbczl\" (UniqueName: \"kubernetes.io/projected/ea3b33b3-6f01-403c-87ed-3c0727db2a97-kube-api-access-bbczl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.283837 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.386179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.386312 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbczl\" (UniqueName: \"kubernetes.io/projected/ea3b33b3-6f01-403c-87ed-3c0727db2a97-kube-api-access-bbczl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"
Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.386441 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID:
\"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.391873 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.399689 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.406878 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbczl\" (UniqueName: \"kubernetes.io/projected/ea3b33b3-6f01-403c-87ed-3c0727db2a97-kube-api-access-bbczl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:29 crc kubenswrapper[4675]: I1121 14:13:29.521086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:30 crc kubenswrapper[4675]: I1121 14:13:30.084151 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22"] Nov 21 14:13:30 crc kubenswrapper[4675]: I1121 14:13:30.136813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" event={"ID":"ea3b33b3-6f01-403c-87ed-3c0727db2a97","Type":"ContainerStarted","Data":"0fd11d4971424869f6a43e69762750cc13f98a2507ef0513229b05e2bde20179"} Nov 21 14:13:31 crc kubenswrapper[4675]: I1121 14:13:31.147919 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" event={"ID":"ea3b33b3-6f01-403c-87ed-3c0727db2a97","Type":"ContainerStarted","Data":"23bfc0359a76b2ff7be9aac8c6c3e1620763f4d54943036ef1c36bece464ec8d"} Nov 21 14:13:31 crc kubenswrapper[4675]: I1121 14:13:31.162994 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" podStartSLOduration=1.593225788 podStartE2EDuration="2.162971843s" podCreationTimestamp="2025-11-21 14:13:29 +0000 UTC" firstStartedPulling="2025-11-21 14:13:30.090606045 +0000 UTC m=+2486.817020772" lastFinishedPulling="2025-11-21 14:13:30.6603521 +0000 UTC m=+2487.386766827" observedRunningTime="2025-11-21 14:13:31.162352177 +0000 UTC m=+2487.888766904" watchObservedRunningTime="2025-11-21 14:13:31.162971843 +0000 UTC m=+2487.889386580" Nov 21 14:13:31 crc kubenswrapper[4675]: I1121 14:13:31.849460 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:13:31 crc kubenswrapper[4675]: E1121 14:13:31.849844 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:13:32 crc kubenswrapper[4675]: I1121 14:13:32.052051 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-7dswk"] Nov 21 14:13:32 crc kubenswrapper[4675]: I1121 14:13:32.064192 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-7dswk"] Nov 21 14:13:32 crc kubenswrapper[4675]: I1121 14:13:32.867158 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf5e4dd-3414-4e74-a64e-94403684c91b" path="/var/lib/kubelet/pods/bcf5e4dd-3414-4e74-a64e-94403684c91b/volumes" Nov 21 14:13:40 crc kubenswrapper[4675]: I1121 14:13:40.247419 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea3b33b3-6f01-403c-87ed-3c0727db2a97" containerID="23bfc0359a76b2ff7be9aac8c6c3e1620763f4d54943036ef1c36bece464ec8d" exitCode=0 Nov 21 14:13:40 crc kubenswrapper[4675]: I1121 14:13:40.247495 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" event={"ID":"ea3b33b3-6f01-403c-87ed-3c0727db2a97","Type":"ContainerDied","Data":"23bfc0359a76b2ff7be9aac8c6c3e1620763f4d54943036ef1c36bece464ec8d"} Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.753060 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.877653 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-inventory\") pod \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.878517 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-ssh-key\") pod \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.878715 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbczl\" (UniqueName: \"kubernetes.io/projected/ea3b33b3-6f01-403c-87ed-3c0727db2a97-kube-api-access-bbczl\") pod \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\" (UID: \"ea3b33b3-6f01-403c-87ed-3c0727db2a97\") " Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.883300 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b33b3-6f01-403c-87ed-3c0727db2a97-kube-api-access-bbczl" (OuterVolumeSpecName: "kube-api-access-bbczl") pod "ea3b33b3-6f01-403c-87ed-3c0727db2a97" (UID: "ea3b33b3-6f01-403c-87ed-3c0727db2a97"). InnerVolumeSpecName "kube-api-access-bbczl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.914533 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-inventory" (OuterVolumeSpecName: "inventory") pod "ea3b33b3-6f01-403c-87ed-3c0727db2a97" (UID: "ea3b33b3-6f01-403c-87ed-3c0727db2a97"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.918958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ea3b33b3-6f01-403c-87ed-3c0727db2a97" (UID: "ea3b33b3-6f01-403c-87ed-3c0727db2a97"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.982800 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.982843 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea3b33b3-6f01-403c-87ed-3c0727db2a97-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:13:41 crc kubenswrapper[4675]: I1121 14:13:41.982864 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbczl\" (UniqueName: \"kubernetes.io/projected/ea3b33b3-6f01-403c-87ed-3c0727db2a97-kube-api-access-bbczl\") on node \"crc\" DevicePath \"\"" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.276258 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" event={"ID":"ea3b33b3-6f01-403c-87ed-3c0727db2a97","Type":"ContainerDied","Data":"0fd11d4971424869f6a43e69762750cc13f98a2507ef0513229b05e2bde20179"} Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.276309 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd11d4971424869f6a43e69762750cc13f98a2507ef0513229b05e2bde20179" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.276358 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.373126 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6"] Nov 21 14:13:42 crc kubenswrapper[4675]: E1121 14:13:42.373678 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b33b3-6f01-403c-87ed-3c0727db2a97" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.373699 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b33b3-6f01-403c-87ed-3c0727db2a97" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.373931 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3b33b3-6f01-403c-87ed-3c0727db2a97" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.374760 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.377571 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.377592 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.377753 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.377863 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.377896 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.378117 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.378127 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.380682 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.401779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.402104 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6"] Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495040 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495107 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495187 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495491 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjmj9\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-kube-api-access-kjmj9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495717 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.495884 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496026 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496204 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496302 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496630 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496707 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496842 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.496964 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.599864 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.599990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600027 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600145 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600254 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600426 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600476 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjmj9\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-kube-api-access-kjmj9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600556 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600600 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600637 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600682 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.600723 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.604281 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.604423 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.604568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.604853 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.606017 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.606172 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.607100 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.607368 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" 
(UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.608156 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.608566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.609132 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.609254 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.610354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.611155 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.611333 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.621381 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjmj9\" (UniqueName: 
\"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-kube-api-access-kjmj9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l69j6\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.697364 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:13:42 crc kubenswrapper[4675]: I1121 14:13:42.851475 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:13:42 crc kubenswrapper[4675]: E1121 14:13:42.854258 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:13:43 crc kubenswrapper[4675]: I1121 14:13:43.300859 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6"] Nov 21 14:13:44 crc kubenswrapper[4675]: I1121 14:13:44.302087 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" event={"ID":"cfe1a316-0dad-402c-b056-2302e5fe219a","Type":"ContainerStarted","Data":"61f1a5c5afbb572c580dea35e997d4aaf011aa91647a6742faa1b903e73f91e5"} Nov 21 14:13:45 crc kubenswrapper[4675]: I1121 14:13:45.315430 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" event={"ID":"cfe1a316-0dad-402c-b056-2302e5fe219a","Type":"ContainerStarted","Data":"f099f7def543cd90550322319ccbaf0cba87f573ea613e66550701adf7d51f74"} Nov 21 14:13:45 crc kubenswrapper[4675]: I1121 14:13:45.342427 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" podStartSLOduration=1.845930012 podStartE2EDuration="3.342400813s" podCreationTimestamp="2025-11-21 14:13:42 +0000 UTC" firstStartedPulling="2025-11-21 14:13:43.303081255 +0000 UTC m=+2500.029495982" lastFinishedPulling="2025-11-21 14:13:44.799552036 +0000 UTC m=+2501.525966783" observedRunningTime="2025-11-21 14:13:45.335751815 +0000 UTC m=+2502.062166552" watchObservedRunningTime="2025-11-21 14:13:45.342400813 +0000 UTC m=+2502.068815560" Nov 21 14:13:55 crc kubenswrapper[4675]: I1121 14:13:55.849480 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:13:55 crc kubenswrapper[4675]: E1121 14:13:55.850408 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:14:10 crc kubenswrapper[4675]: I1121 14:14:10.848725 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:14:10 
crc kubenswrapper[4675]: E1121 14:14:10.849460 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:14:12 crc kubenswrapper[4675]: I1121 14:14:12.058845 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-j42fk"] Nov 21 14:14:12 crc kubenswrapper[4675]: I1121 14:14:12.072969 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-j42fk"] Nov 21 14:14:12 crc kubenswrapper[4675]: I1121 14:14:12.863152 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3d3c29-d332-4c74-af5e-60e1f2eea6f4" path="/var/lib/kubelet/pods/8a3d3c29-d332-4c74-af5e-60e1f2eea6f4/volumes" Nov 21 14:14:23 crc kubenswrapper[4675]: I1121 14:14:23.849721 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:14:23 crc kubenswrapper[4675]: E1121 14:14:23.850890 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:14:25 crc kubenswrapper[4675]: I1121 14:14:25.684786 4675 scope.go:117] "RemoveContainer" containerID="6c92cfbbd49a132d32cbbd6b165f4a8201904ad80a7b5f9f186760d12f4f1ff7" Nov 21 14:14:25 crc kubenswrapper[4675]: I1121 14:14:25.738358 4675 scope.go:117] "RemoveContainer" containerID="c6f2bd1c3b48e73def5381dee1f61982c11a69f6da12e9b017d86bc996baf745" Nov 21 14:14:26 crc kubenswrapper[4675]: I1121 14:14:26.772615 4675 generic.go:334] "Generic (PLEG): container finished" podID="cfe1a316-0dad-402c-b056-2302e5fe219a" containerID="f099f7def543cd90550322319ccbaf0cba87f573ea613e66550701adf7d51f74" exitCode=0 Nov 21 14:14:26 crc kubenswrapper[4675]: I1121 14:14:26.772809 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" event={"ID":"cfe1a316-0dad-402c-b056-2302e5fe219a","Type":"ContainerDied","Data":"f099f7def543cd90550322319ccbaf0cba87f573ea613e66550701adf7d51f74"} Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.235203 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.363982 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364171 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-repo-setup-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364211 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364248 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjmj9\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-kube-api-access-kjmj9\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364304 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-power-monitoring-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364333 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-libvirt-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364378 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364406 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-nova-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364474 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-bootstrap-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: 
\"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ssh-key\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364672 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-neutron-metadata-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364714 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.364802 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-inventory\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.365351 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ovn-combined-ca-bundle\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.365398 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.365555 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"cfe1a316-0dad-402c-b056-2302e5fe219a\" (UID: \"cfe1a316-0dad-402c-b056-2302e5fe219a\") " Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.371765 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.372691 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.372898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.373248 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.374416 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.374868 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.374895 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.375027 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.375593 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.375611 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.375626 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.375938 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-kube-api-access-kjmj9" (OuterVolumeSpecName: "kube-api-access-kjmj9") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "kube-api-access-kjmj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.377675 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.377699 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.402525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.402980 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-inventory" (OuterVolumeSpecName: "inventory") pod "cfe1a316-0dad-402c-b056-2302e5fe219a" (UID: "cfe1a316-0dad-402c-b056-2302e5fe219a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468736 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468820 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468836 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468853 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468869 4675 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468888 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468902 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjmj9\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-kube-api-access-kjmj9\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468915 4675 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468929 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468941 4675 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-telemetry-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468953 4675 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468961 4675 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468970 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468980 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.468991 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cfe1a316-0dad-402c-b056-2302e5fe219a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.469000 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfe1a316-0dad-402c-b056-2302e5fe219a-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.799510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" event={"ID":"cfe1a316-0dad-402c-b056-2302e5fe219a","Type":"ContainerDied","Data":"61f1a5c5afbb572c580dea35e997d4aaf011aa91647a6742faa1b903e73f91e5"} Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.799551 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f1a5c5afbb572c580dea35e997d4aaf011aa91647a6742faa1b903e73f91e5" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.799608 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l69j6" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.963851 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7"] Nov 21 14:14:28 crc kubenswrapper[4675]: E1121 14:14:28.964468 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe1a316-0dad-402c-b056-2302e5fe219a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.964488 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe1a316-0dad-402c-b056-2302e5fe219a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.964772 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe1a316-0dad-402c-b056-2302e5fe219a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.966274 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.971729 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.972174 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.972456 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.972756 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.975145 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 21 14:14:28 crc kubenswrapper[4675]: I1121 14:14:28.994135 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7"] Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.105885 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.106243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.106549 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.106608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd5e1c55-691e-40cc-9e53-b905864402fb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.106757 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5ng\" (UniqueName: \"kubernetes.io/projected/cd5e1c55-691e-40cc-9e53-b905864402fb-kube-api-access-4x5ng\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.209721 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.209807 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd5e1c55-691e-40cc-9e53-b905864402fb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.209886 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5ng\" (UniqueName: \"kubernetes.io/projected/cd5e1c55-691e-40cc-9e53-b905864402fb-kube-api-access-4x5ng\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.209985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.210110 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.211148 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd5e1c55-691e-40cc-9e53-b905864402fb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.214847 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.215893 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.216423 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.227903 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5ng\" (UniqueName: \"kubernetes.io/projected/cd5e1c55-691e-40cc-9e53-b905864402fb-kube-api-access-4x5ng\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rvbj7\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.290811 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" Nov 21 14:14:29 crc kubenswrapper[4675]: I1121 14:14:29.853091 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7"] Nov 21 14:14:30 crc kubenswrapper[4675]: I1121 14:14:30.822561 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" event={"ID":"cd5e1c55-691e-40cc-9e53-b905864402fb","Type":"ContainerStarted","Data":"5036e87615b497626eae2714408436a34155baa497cdf431c65f112d69db0ad7"} Nov 21 14:14:30 crc kubenswrapper[4675]: I1121 14:14:30.822883 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" event={"ID":"cd5e1c55-691e-40cc-9e53-b905864402fb","Type":"ContainerStarted","Data":"f3996fd36520c2250949e27dbf9f9ca86e9e189a6d6c08b0586dd7a8ea751c48"} Nov 21 14:14:30 crc kubenswrapper[4675]: I1121 14:14:30.852333 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" podStartSLOduration=2.450909663 podStartE2EDuration="2.852304013s" podCreationTimestamp="2025-11-21 14:14:28 +0000 UTC" firstStartedPulling="2025-11-21 14:14:29.858273951 +0000 UTC m=+2546.584688678" lastFinishedPulling="2025-11-21 14:14:30.259668311 +0000 UTC m=+2546.986083028" observedRunningTime="2025-11-21 14:14:30.83948996 +0000 UTC m=+2547.565904707" watchObservedRunningTime="2025-11-21 14:14:30.852304013 +0000 UTC m=+2547.578718750" Nov 21 14:14:35 crc kubenswrapper[4675]: I1121 14:14:35.849464 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:14:35 crc kubenswrapper[4675]: E1121 14:14:35.850493 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:14:47 crc kubenswrapper[4675]: I1121 14:14:47.848740 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:14:47 crc kubenswrapper[4675]: E1121 14:14:47.849661 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:14:58 crc 
Nov 21 14:14:35 crc kubenswrapper[4675]: I1121 14:14:35.849464 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:14:35 crc kubenswrapper[4675]: E1121 14:14:35.850493 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:14:47 crc kubenswrapper[4675]: I1121 14:14:47.848740 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:14:47 crc kubenswrapper[4675]: E1121 14:14:47.849661 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:14:58 crc kubenswrapper[4675]: I1121 14:14:58.849664 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:14:58 crc kubenswrapper[4675]: E1121 14:14:58.850441 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.164691 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"]
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.167672 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.170336 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.171869 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.180887 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"]
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.205301 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7xw\" (UniqueName: \"kubernetes.io/projected/0bc38a92-f126-4d9b-9500-3f98029d5cfe-kube-api-access-vr7xw\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.205558 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bc38a92-f126-4d9b-9500-3f98029d5cfe-config-volume\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.205798 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bc38a92-f126-4d9b-9500-3f98029d5cfe-secret-volume\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.307287 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr7xw\" (UniqueName: \"kubernetes.io/projected/0bc38a92-f126-4d9b-9500-3f98029d5cfe-kube-api-access-vr7xw\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.307373 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bc38a92-f126-4d9b-9500-3f98029d5cfe-config-volume\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.307439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bc38a92-f126-4d9b-9500-3f98029d5cfe-secret-volume\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.308607 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bc38a92-f126-4d9b-9500-3f98029d5cfe-config-volume\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.314916 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bc38a92-f126-4d9b-9500-3f98029d5cfe-secret-volume\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Nov 21 14:15:00 crc kubenswrapper[4675]: I1121 14:15:00.328330 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr7xw\" (UniqueName: \"kubernetes.io/projected/0bc38a92-f126-4d9b-9500-3f98029d5cfe-kube-api-access-vr7xw\") pod \"collect-profiles-29395575-wbnw4\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" Nov 21 14:15:01 crc kubenswrapper[4675]: I1121 14:15:01.041218 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"] Nov 21 14:15:01 crc kubenswrapper[4675]: I1121 14:15:01.133324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" event={"ID":"0bc38a92-f126-4d9b-9500-3f98029d5cfe","Type":"ContainerStarted","Data":"4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52"} Nov 21 14:15:02 crc kubenswrapper[4675]: I1121 14:15:02.143963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" event={"ID":"0bc38a92-f126-4d9b-9500-3f98029d5cfe","Type":"ContainerStarted","Data":"00b1052d53e4038dc3e546a923555b382c0cb5c3b7f2b0f5ecef05c5908b06fd"} Nov 21 14:15:02 crc kubenswrapper[4675]: I1121 14:15:02.161823 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" podStartSLOduration=2.161806976 podStartE2EDuration="2.161806976s" podCreationTimestamp="2025-11-21 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:15:02.15921702 +0000 UTC m=+2578.885631737" watchObservedRunningTime="2025-11-21 14:15:02.161806976 +0000 UTC m=+2578.888221703" Nov 21 14:15:03 crc kubenswrapper[4675]: I1121 14:15:03.157951 4675 generic.go:334] "Generic (PLEG): container finished" podID="0bc38a92-f126-4d9b-9500-3f98029d5cfe" containerID="00b1052d53e4038dc3e546a923555b382c0cb5c3b7f2b0f5ecef05c5908b06fd" exitCode=0 Nov 21 14:15:03 crc kubenswrapper[4675]: I1121 14:15:03.158303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" event={"ID":"0bc38a92-f126-4d9b-9500-3f98029d5cfe","Type":"ContainerDied","Data":"00b1052d53e4038dc3e546a923555b382c0cb5c3b7f2b0f5ecef05c5908b06fd"} Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.670905 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.842496 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bc38a92-f126-4d9b-9500-3f98029d5cfe-config-volume\") pod \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.842739 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bc38a92-f126-4d9b-9500-3f98029d5cfe-secret-volume\") pod \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.842834 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr7xw\" (UniqueName: \"kubernetes.io/projected/0bc38a92-f126-4d9b-9500-3f98029d5cfe-kube-api-access-vr7xw\") pod \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\" (UID: \"0bc38a92-f126-4d9b-9500-3f98029d5cfe\") " Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.851027 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc38a92-f126-4d9b-9500-3f98029d5cfe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bc38a92-f126-4d9b-9500-3f98029d5cfe" (UID: "0bc38a92-f126-4d9b-9500-3f98029d5cfe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.859549 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc38a92-f126-4d9b-9500-3f98029d5cfe-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bc38a92-f126-4d9b-9500-3f98029d5cfe" (UID: "0bc38a92-f126-4d9b-9500-3f98029d5cfe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.865225 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc38a92-f126-4d9b-9500-3f98029d5cfe-kube-api-access-vr7xw" (OuterVolumeSpecName: "kube-api-access-vr7xw") pod "0bc38a92-f126-4d9b-9500-3f98029d5cfe" (UID: "0bc38a92-f126-4d9b-9500-3f98029d5cfe"). InnerVolumeSpecName "kube-api-access-vr7xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.946359 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bc38a92-f126-4d9b-9500-3f98029d5cfe-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.946388 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr7xw\" (UniqueName: \"kubernetes.io/projected/0bc38a92-f126-4d9b-9500-3f98029d5cfe-kube-api-access-vr7xw\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:04 crc kubenswrapper[4675]: I1121 14:15:04.946397 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bc38a92-f126-4d9b-9500-3f98029d5cfe-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:05 crc kubenswrapper[4675]: I1121 14:15:05.179895 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" event={"ID":"0bc38a92-f126-4d9b-9500-3f98029d5cfe","Type":"ContainerDied","Data":"4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52"} Nov 21 14:15:05 crc kubenswrapper[4675]: I1121 14:15:05.180280 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52" Nov 21 14:15:05 crc kubenswrapper[4675]: I1121 14:15:05.179977 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4" Nov 21 14:15:05 crc kubenswrapper[4675]: I1121 14:15:05.247623 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"] Nov 21 14:15:05 crc kubenswrapper[4675]: I1121 14:15:05.260713 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-gwk6l"] Nov 21 14:15:06 crc kubenswrapper[4675]: E1121 14:15:06.481124 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]" Nov 21 14:15:06 crc kubenswrapper[4675]: I1121 14:15:06.864990 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531294e8-2e33-4f1f-848a-b2d19d8e6102" path="/var/lib/kubelet/pods/531294e8-2e33-4f1f-848a-b2d19d8e6102/volumes" Nov 21 14:15:09 crc kubenswrapper[4675]: I1121 14:15:09.849588 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:15:09 crc kubenswrapper[4675]: E1121 14:15:09.850584 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:15:13 crc 
Nov 21 14:15:13 crc kubenswrapper[4675]: E1121 14:15:13.338905 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:16 crc kubenswrapper[4675]: E1121 14:15:16.525896 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:21 crc kubenswrapper[4675]: I1121 14:15:21.849889 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:15:21 crc kubenswrapper[4675]: E1121 14:15:21.851527 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:15:25 crc kubenswrapper[4675]: I1121 14:15:25.853540 4675 scope.go:117] "RemoveContainer" containerID="f5815816075109f278c20b43f6361f7188aa2b491edd36ff307b3c09fe9f8a5a"
Nov 21 14:15:25 crc kubenswrapper[4675]: I1121 14:15:25.888543 4675 scope.go:117] "RemoveContainer" containerID="1ecc1425ec73a2b1a36bc6992920d5642ef154f4e2d8a7ad1a5992dc49ea327b"
Nov 21 14:15:25 crc kubenswrapper[4675]: I1121 14:15:25.944017 4675 scope.go:117] "RemoveContainer" containerID="e15c9cf15d7df97bdf3db55be7c2ede622315938610a97f67e07e50350231a79"
Nov 21 14:15:26 crc kubenswrapper[4675]: I1121 14:15:26.013941 4675 scope.go:117] "RemoveContainer" containerID="178ef29ab8e9018f748325c2eaf903c5b00be649254b1e636763ab476ae73211"
Nov 21 14:15:26 crc kubenswrapper[4675]: E1121 14:15:26.820858 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:28 crc kubenswrapper[4675]: E1121 14:15:28.067503 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:32 crc kubenswrapper[4675]: I1121 14:15:32.473374 4675 generic.go:334] "Generic (PLEG): container finished" podID="cd5e1c55-691e-40cc-9e53-b905864402fb" containerID="5036e87615b497626eae2714408436a34155baa497cdf431c65f112d69db0ad7" exitCode=0
Nov 21 14:15:32 crc kubenswrapper[4675]: I1121 14:15:32.473459 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" event={"ID":"cd5e1c55-691e-40cc-9e53-b905864402fb","Type":"ContainerDied","Data":"5036e87615b497626eae2714408436a34155baa497cdf431c65f112d69db0ad7"}
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.011364 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.139544 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x5ng\" (UniqueName: \"kubernetes.io/projected/cd5e1c55-691e-40cc-9e53-b905864402fb-kube-api-access-4x5ng\") pod \"cd5e1c55-691e-40cc-9e53-b905864402fb\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") "
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.139596 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd5e1c55-691e-40cc-9e53-b905864402fb-ovncontroller-config-0\") pod \"cd5e1c55-691e-40cc-9e53-b905864402fb\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") "
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.139648 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ovn-combined-ca-bundle\") pod \"cd5e1c55-691e-40cc-9e53-b905864402fb\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") "
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.139749 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-inventory\") pod \"cd5e1c55-691e-40cc-9e53-b905864402fb\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") "
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.139879 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ssh-key\") pod \"cd5e1c55-691e-40cc-9e53-b905864402fb\" (UID: \"cd5e1c55-691e-40cc-9e53-b905864402fb\") "
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.148176 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cd5e1c55-691e-40cc-9e53-b905864402fb" (UID: "cd5e1c55-691e-40cc-9e53-b905864402fb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.149341 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5e1c55-691e-40cc-9e53-b905864402fb-kube-api-access-4x5ng" (OuterVolumeSpecName: "kube-api-access-4x5ng") pod "cd5e1c55-691e-40cc-9e53-b905864402fb" (UID: "cd5e1c55-691e-40cc-9e53-b905864402fb"). InnerVolumeSpecName "kube-api-access-4x5ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.176808 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-inventory" (OuterVolumeSpecName: "inventory") pod "cd5e1c55-691e-40cc-9e53-b905864402fb" (UID: "cd5e1c55-691e-40cc-9e53-b905864402fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.180512 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5e1c55-691e-40cc-9e53-b905864402fb-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cd5e1c55-691e-40cc-9e53-b905864402fb" (UID: "cd5e1c55-691e-40cc-9e53-b905864402fb"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.189750 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cd5e1c55-691e-40cc-9e53-b905864402fb" (UID: "cd5e1c55-691e-40cc-9e53-b905864402fb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.243387 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x5ng\" (UniqueName: \"kubernetes.io/projected/cd5e1c55-691e-40cc-9e53-b905864402fb-kube-api-access-4x5ng\") on node \"crc\" DevicePath \"\""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.243419 4675 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd5e1c55-691e-40cc-9e53-b905864402fb-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.243428 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.243437 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.243445 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5e1c55-691e-40cc-9e53-b905864402fb-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.498279 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7" event={"ID":"cd5e1c55-691e-40cc-9e53-b905864402fb","Type":"ContainerDied","Data":"f3996fd36520c2250949e27dbf9f9ca86e9e189a6d6c08b0586dd7a8ea751c48"}
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.498327 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rvbj7"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.498331 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3996fd36520c2250949e27dbf9f9ca86e9e189a6d6c08b0586dd7a8ea751c48"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.641299 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"]
Nov 21 14:15:34 crc kubenswrapper[4675]: E1121 14:15:34.643281 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc38a92-f126-4d9b-9500-3f98029d5cfe" containerName="collect-profiles"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.643305 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc38a92-f126-4d9b-9500-3f98029d5cfe" containerName="collect-profiles"
Nov 21 14:15:34 crc kubenswrapper[4675]: E1121 14:15:34.643320 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5e1c55-691e-40cc-9e53-b905864402fb" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.643329 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5e1c55-691e-40cc-9e53-b905864402fb" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.643611 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc38a92-f126-4d9b-9500-3f98029d5cfe" containerName="collect-profiles"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.643640 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5e1c55-691e-40cc-9e53-b905864402fb" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.644491 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.647301 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.647452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.647614 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.647735 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.647874 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.648001 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.678545 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"]
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.755543 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9rp\" (UniqueName: \"kubernetes.io/projected/50a8108e-2cd1-42e7-9efe-5c2478adb797-kube-api-access-cq9rp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.755602 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.755675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.755897 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.756007 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.756032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.868369 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9rp\" (UniqueName: \"kubernetes.io/projected/50a8108e-2cd1-42e7-9efe-5c2478adb797-kube-api-access-cq9rp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.868558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.868821 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.868861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.868940 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.868985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.873161 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.876757 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.877364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.884644 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.890783 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.895538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9rp\" (UniqueName: \"kubernetes.io/projected/50a8108e-2cd1-42e7-9efe-5c2478adb797-kube-api-access-cq9rp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:34 crc kubenswrapper[4675]: I1121 14:15:34.975416 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"
Nov 21 14:15:35 crc kubenswrapper[4675]: I1121 14:15:35.577588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz"]
Nov 21 14:15:36 crc kubenswrapper[4675]: I1121 14:15:36.533912 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" event={"ID":"50a8108e-2cd1-42e7-9efe-5c2478adb797","Type":"ContainerStarted","Data":"d5bc6c0d91789977ef1585117c15a203b36ea8f1e2a77869db6a18c29c46f8df"}
Nov 21 14:15:36 crc kubenswrapper[4675]: I1121 14:15:36.849041 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:15:36 crc kubenswrapper[4675]: E1121 14:15:36.849632 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:15:37 crc kubenswrapper[4675]: E1121 14:15:37.140143 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:37 crc kubenswrapper[4675]: I1121 14:15:37.547516 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" event={"ID":"50a8108e-2cd1-42e7-9efe-5c2478adb797","Type":"ContainerStarted","Data":"ac429ada3b410158805a447c4bd32f98e5588ae2a28343ce3bbd0f5ce1d30200"}
Nov 21 14:15:37 crc kubenswrapper[4675]: I1121 14:15:37.568343 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" podStartSLOduration=2.915973106 podStartE2EDuration="3.568322684s" podCreationTimestamp="2025-11-21 14:15:34 +0000 UTC" firstStartedPulling="2025-11-21 14:15:35.58539375 +0000 UTC m=+2612.311808467" lastFinishedPulling="2025-11-21 14:15:36.237743318 +0000 UTC m=+2612.964158045" observedRunningTime="2025-11-21 14:15:37.56300223 +0000 UTC m=+2614.289416967" watchObservedRunningTime="2025-11-21 14:15:37.568322684 +0000 UTC m=+2614.294737411"
Nov 21 14:15:43 crc kubenswrapper[4675]: E1121 14:15:43.355499 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:47 crc kubenswrapper[4675]: E1121 14:15:47.187176 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:48 crc kubenswrapper[4675]: E1121 14:15:48.156537 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:48 crc kubenswrapper[4675]: E1121 14:15:48.159277 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]"
Nov 21 14:15:50 crc kubenswrapper[4675]: I1121 14:15:50.849994 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b"
Nov 21 14:15:51 crc kubenswrapper[4675]: I1121 14:15:51.694022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d"}
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]" Nov 21 14:15:48 crc kubenswrapper[4675]: E1121 14:15:48.156537 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]" Nov 21 14:15:48 crc kubenswrapper[4675]: E1121 14:15:48.159277 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]" Nov 21 14:15:50 crc kubenswrapper[4675]: I1121 14:15:50.849994 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:15:51 crc kubenswrapper[4675]: I1121 14:15:51.694022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d"} Nov 21 14:15:57 crc kubenswrapper[4675]: E1121 14:15:57.496573 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]" Nov 21 14:15:58 crc kubenswrapper[4675]: E1121 14:15:58.062527 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc38a92_f126_4d9b_9500_3f98029d5cfe.slice/crio-4bab97368a927a38d8a91256702fde71dfaf85ffe555170e2eef65b0e1452f52\": RecentStats: unable to find data in memory cache]" Nov 21 14:16:25 crc kubenswrapper[4675]: I1121 14:16:25.122721 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a8108e-2cd1-42e7-9efe-5c2478adb797" containerID="ac429ada3b410158805a447c4bd32f98e5588ae2a28343ce3bbd0f5ce1d30200" exitCode=0 Nov 21 14:16:25 crc 
kubenswrapper[4675]: I1121 14:16:25.122799 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" event={"ID":"50a8108e-2cd1-42e7-9efe-5c2478adb797","Type":"ContainerDied","Data":"ac429ada3b410158805a447c4bd32f98e5588ae2a28343ce3bbd0f5ce1d30200"} Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.649912 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.794902 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq9rp\" (UniqueName: \"kubernetes.io/projected/50a8108e-2cd1-42e7-9efe-5c2478adb797-kube-api-access-cq9rp\") pod \"50a8108e-2cd1-42e7-9efe-5c2478adb797\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.795144 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-metadata-combined-ca-bundle\") pod \"50a8108e-2cd1-42e7-9efe-5c2478adb797\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.795189 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-inventory\") pod \"50a8108e-2cd1-42e7-9efe-5c2478adb797\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.795282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-ovn-metadata-agent-neutron-config-0\") pod \"50a8108e-2cd1-42e7-9efe-5c2478adb797\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.795338 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-ssh-key\") pod \"50a8108e-2cd1-42e7-9efe-5c2478adb797\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.795424 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-nova-metadata-neutron-config-0\") pod \"50a8108e-2cd1-42e7-9efe-5c2478adb797\" (UID: \"50a8108e-2cd1-42e7-9efe-5c2478adb797\") " Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.801285 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "50a8108e-2cd1-42e7-9efe-5c2478adb797" (UID: "50a8108e-2cd1-42e7-9efe-5c2478adb797"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.801293 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a8108e-2cd1-42e7-9efe-5c2478adb797-kube-api-access-cq9rp" (OuterVolumeSpecName: "kube-api-access-cq9rp") pod "50a8108e-2cd1-42e7-9efe-5c2478adb797" (UID: "50a8108e-2cd1-42e7-9efe-5c2478adb797"). InnerVolumeSpecName "kube-api-access-cq9rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.828238 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "50a8108e-2cd1-42e7-9efe-5c2478adb797" (UID: "50a8108e-2cd1-42e7-9efe-5c2478adb797"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.831810 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-inventory" (OuterVolumeSpecName: "inventory") pod "50a8108e-2cd1-42e7-9efe-5c2478adb797" (UID: "50a8108e-2cd1-42e7-9efe-5c2478adb797"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.833268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "50a8108e-2cd1-42e7-9efe-5c2478adb797" (UID: "50a8108e-2cd1-42e7-9efe-5c2478adb797"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.851808 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "50a8108e-2cd1-42e7-9efe-5c2478adb797" (UID: "50a8108e-2cd1-42e7-9efe-5c2478adb797"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.898215 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.898274 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.898290 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.898304 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.898315 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/50a8108e-2cd1-42e7-9efe-5c2478adb797-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:16:26 crc kubenswrapper[4675]: I1121 14:16:26.898364 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq9rp\" (UniqueName: \"kubernetes.io/projected/50a8108e-2cd1-42e7-9efe-5c2478adb797-kube-api-access-cq9rp\") on node \"crc\" DevicePath \"\"" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.148140 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" event={"ID":"50a8108e-2cd1-42e7-9efe-5c2478adb797","Type":"ContainerDied","Data":"d5bc6c0d91789977ef1585117c15a203b36ea8f1e2a77869db6a18c29c46f8df"} Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.148178 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5bc6c0d91789977ef1585117c15a203b36ea8f1e2a77869db6a18c29c46f8df" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.148468 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.237930 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh"] Nov 21 14:16:27 crc kubenswrapper[4675]: E1121 14:16:27.238904 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a8108e-2cd1-42e7-9efe-5c2478adb797" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.238934 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a8108e-2cd1-42e7-9efe-5c2478adb797" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.239313 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a8108e-2cd1-42e7-9efe-5c2478adb797" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.240519 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.242667 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.242790 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.243618 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.243631 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.245187 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.252567 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh"] Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.307610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.307688 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.307995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.308098 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.308367 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24swm\" (UniqueName: \"kubernetes.io/projected/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-kube-api-access-24swm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.410590 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-24swm\" (UniqueName: \"kubernetes.io/projected/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-kube-api-access-24swm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.410734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.410778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.410866 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.410900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.415353 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.415819 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.415930 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.416299 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.430736 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24swm\" (UniqueName: \"kubernetes.io/projected/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-kube-api-access-24swm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:27 crc kubenswrapper[4675]: I1121 14:16:27.565937 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:16:28 crc kubenswrapper[4675]: I1121 14:16:28.216338 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh"] Nov 21 14:16:29 crc kubenswrapper[4675]: I1121 14:16:29.172058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" event={"ID":"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687","Type":"ContainerStarted","Data":"85fbdf08200c475331c541f0d7c8897ab9da6a577ded73b147e1edf3df24f8c3"} Nov 21 14:16:29 crc kubenswrapper[4675]: I1121 14:16:29.172615 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" event={"ID":"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687","Type":"ContainerStarted","Data":"8469bdd6c2527215e6ab5a359ca9eff12f64e6a2ead0ab75168c64579c250246"} Nov 21 14:16:29 crc kubenswrapper[4675]: I1121 14:16:29.189206 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" podStartSLOduration=1.687759787 podStartE2EDuration="2.189183959s" podCreationTimestamp="2025-11-21 14:16:27 +0000 UTC" firstStartedPulling="2025-11-21 14:16:28.231987636 +0000 UTC m=+2664.958402363" lastFinishedPulling="2025-11-21 14:16:28.733411808 +0000 UTC m=+2665.459826535" observedRunningTime="2025-11-21 14:16:29.185924907 +0000 UTC m=+2665.912339644" watchObservedRunningTime="2025-11-21 14:16:29.189183959 +0000 UTC m=+2665.915598706" Nov 21 14:17:26 crc kubenswrapper[4675]: I1121 14:17:26.175785 4675 scope.go:117] "RemoveContainer" containerID="a81f4ae4de9e75771607f34fb12f2dad74f6548f5b6fa8d081f0e007fea5cca1" Nov 21 14:17:26 crc kubenswrapper[4675]: I1121 14:17:26.208055 4675 scope.go:117] "RemoveContainer" containerID="1228b26b012ffd3ec3bee68edeef11f68bcff3efc5aa9a1f07167e54b74b015f" Nov 21 14:17:26 crc kubenswrapper[4675]: I1121 14:17:26.229764 4675 scope.go:117] "RemoveContainer" containerID="1777e0160dc6031c9789f64acbe89aad10a2498938f56d03e320131a3b6bebc5" Nov 21 14:18:16 crc kubenswrapper[4675]: I1121 14:18:16.136641 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:18:16 crc kubenswrapper[4675]: I1121 14:18:16.137315 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 21 14:18:46 crc kubenswrapper[4675]: I1121 14:18:46.136856 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:18:46 crc kubenswrapper[4675]: I1121 14:18:46.137869 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.100363 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5dwj6"] Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.103336 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.114511 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dwj6"] Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.277783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-utilities\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.277889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-catalog-content\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.278430 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcv6z\" (UniqueName: \"kubernetes.io/projected/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-kube-api-access-pcv6z\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.380319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-catalog-content\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.380563 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcv6z\" (UniqueName: \"kubernetes.io/projected/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-kube-api-access-pcv6z\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.380636 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-utilities\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.381004 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-catalog-content\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.381054 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-utilities\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.401198 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcv6z\" (UniqueName: \"kubernetes.io/projected/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-kube-api-access-pcv6z\") pod \"redhat-marketplace-5dwj6\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.444659 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:01 crc kubenswrapper[4675]: I1121 14:19:01.941021 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dwj6"] Nov 21 14:19:02 crc kubenswrapper[4675]: I1121 14:19:02.892007 4675 generic.go:334] "Generic (PLEG): container finished" podID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerID="1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1" exitCode=0 Nov 21 14:19:02 crc kubenswrapper[4675]: I1121 14:19:02.892791 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dwj6" event={"ID":"b2185c4a-a2f9-4479-91bb-799d8a9b6e53","Type":"ContainerDied","Data":"1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1"} Nov 21 14:19:02 crc kubenswrapper[4675]: I1121 14:19:02.892821 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dwj6" event={"ID":"b2185c4a-a2f9-4479-91bb-799d8a9b6e53","Type":"ContainerStarted","Data":"0a671773166d22debda88a42ecd261f41d8265ce6f91335f3d818526b9711b1e"} Nov 21 14:19:02 crc kubenswrapper[4675]: I1121 14:19:02.896799 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:19:04 crc kubenswrapper[4675]: I1121 14:19:04.914512 4675 generic.go:334] "Generic (PLEG): container finished" podID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerID="390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c" exitCode=0 Nov 21 14:19:04 crc kubenswrapper[4675]: I1121 14:19:04.914808 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dwj6" event={"ID":"b2185c4a-a2f9-4479-91bb-799d8a9b6e53","Type":"ContainerDied","Data":"390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c"} Nov 21 14:19:05 crc kubenswrapper[4675]: I1121 14:19:05.929019 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dwj6" 
event={"ID":"b2185c4a-a2f9-4479-91bb-799d8a9b6e53","Type":"ContainerStarted","Data":"52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2"} Nov 21 14:19:05 crc kubenswrapper[4675]: I1121 14:19:05.945556 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5dwj6" podStartSLOduration=2.515487522 podStartE2EDuration="4.94553926s" podCreationTimestamp="2025-11-21 14:19:01 +0000 UTC" firstStartedPulling="2025-11-21 14:19:02.895975452 +0000 UTC m=+2819.622390179" lastFinishedPulling="2025-11-21 14:19:05.32602719 +0000 UTC m=+2822.052441917" observedRunningTime="2025-11-21 14:19:05.944242287 +0000 UTC m=+2822.670657034" watchObservedRunningTime="2025-11-21 14:19:05.94553926 +0000 UTC m=+2822.671953987" Nov 21 14:19:11 crc kubenswrapper[4675]: I1121 14:19:11.445342 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:11 crc kubenswrapper[4675]: I1121 14:19:11.445942 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:11 crc kubenswrapper[4675]: I1121 14:19:11.512109 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:12 crc kubenswrapper[4675]: I1121 14:19:12.039648 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:12 crc kubenswrapper[4675]: I1121 14:19:12.082803 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dwj6"] Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.010931 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5dwj6" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="registry-server" containerID="cri-o://52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2" gracePeriod=2 Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.512376 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.620958 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-catalog-content\") pod \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.621078 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-utilities\") pod \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.621419 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcv6z\" (UniqueName: \"kubernetes.io/projected/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-kube-api-access-pcv6z\") pod \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\" (UID: \"b2185c4a-a2f9-4479-91bb-799d8a9b6e53\") " Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.621795 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-utilities" (OuterVolumeSpecName: "utilities") pod "b2185c4a-a2f9-4479-91bb-799d8a9b6e53" (UID: "b2185c4a-a2f9-4479-91bb-799d8a9b6e53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.622522 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.626905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-kube-api-access-pcv6z" (OuterVolumeSpecName: "kube-api-access-pcv6z") pod "b2185c4a-a2f9-4479-91bb-799d8a9b6e53" (UID: "b2185c4a-a2f9-4479-91bb-799d8a9b6e53"). InnerVolumeSpecName "kube-api-access-pcv6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.640248 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2185c4a-a2f9-4479-91bb-799d8a9b6e53" (UID: "b2185c4a-a2f9-4479-91bb-799d8a9b6e53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.724804 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcv6z\" (UniqueName: \"kubernetes.io/projected/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-kube-api-access-pcv6z\") on node \"crc\" DevicePath \"\"" Nov 21 14:19:14 crc kubenswrapper[4675]: I1121 14:19:14.724846 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2185c4a-a2f9-4479-91bb-799d8a9b6e53-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.025228 4675 generic.go:334] "Generic (PLEG): container finished" podID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerID="52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2" exitCode=0 Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.025296 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dwj6" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.025281 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dwj6" event={"ID":"b2185c4a-a2f9-4479-91bb-799d8a9b6e53","Type":"ContainerDied","Data":"52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2"} Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.025364 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dwj6" event={"ID":"b2185c4a-a2f9-4479-91bb-799d8a9b6e53","Type":"ContainerDied","Data":"0a671773166d22debda88a42ecd261f41d8265ce6f91335f3d818526b9711b1e"} Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.025388 4675 scope.go:117] "RemoveContainer" containerID="52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.052396 4675 scope.go:117] "RemoveContainer" containerID="390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.054104 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dwj6"] Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.076746 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dwj6"] Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.081076 4675 scope.go:117] "RemoveContainer" containerID="1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.129666 4675 scope.go:117] "RemoveContainer" containerID="52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2" Nov 21 14:19:15 crc kubenswrapper[4675]: E1121 14:19:15.130097 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2\": container with ID starting with 52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2 not found: ID does not exist" containerID="52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.130122 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2"} err="failed to get container status 
\"52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2\": rpc error: code = NotFound desc = could not find container \"52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2\": container with ID starting with 52abe49db631b4cc86ea58b5fe3b51323d7b4adb52271255b2d67497f0d044d2 not found: ID does not exist" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.130143 4675 scope.go:117] "RemoveContainer" containerID="390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c" Nov 21 14:19:15 crc kubenswrapper[4675]: E1121 14:19:15.130425 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c\": container with ID starting with 390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c not found: ID does not exist" containerID="390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.130455 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c"} err="failed to get container status \"390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c\": rpc error: code = NotFound desc = could not find container \"390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c\": container with ID starting with 390942acb706c31257fd8601052d7ebea18e4ac175796c1df7bd020114cf713c not found: ID does not exist" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.130477 4675 scope.go:117] "RemoveContainer" containerID="1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1" Nov 21 14:19:15 crc kubenswrapper[4675]: E1121 14:19:15.130790 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1\": container with ID starting with 1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1 not found: ID does not exist" containerID="1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1" Nov 21 14:19:15 crc kubenswrapper[4675]: I1121 14:19:15.130811 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1"} err="failed to get container status \"1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1\": rpc error: code = NotFound desc = could not find container \"1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1\": container with ID starting with 1994baea513f5fcff714ee4a1eb530ba68a8489ea65e08f704f97074718613d1 not found: ID does not exist" Nov 21 14:19:16 crc kubenswrapper[4675]: I1121 14:19:16.136275 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:19:16 crc kubenswrapper[4675]: I1121 14:19:16.136566 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 
14:19:16 crc kubenswrapper[4675]: I1121 14:19:16.136658 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 14:19:16 crc kubenswrapper[4675]: I1121 14:19:16.137601 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:19:16 crc kubenswrapper[4675]: I1121 14:19:16.137673 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d" gracePeriod=600 Nov 21 14:19:16 crc kubenswrapper[4675]: I1121 14:19:16.864735 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" path="/var/lib/kubelet/pods/b2185c4a-a2f9-4479-91bb-799d8a9b6e53/volumes" Nov 21 14:19:17 crc kubenswrapper[4675]: I1121 14:19:17.069451 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d" exitCode=0 Nov 21 14:19:17 crc kubenswrapper[4675]: I1121 14:19:17.069490 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d"} Nov 21 14:19:17 crc kubenswrapper[4675]: I1121 14:19:17.069724 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"} Nov 21 14:19:17 crc kubenswrapper[4675]: I1121 14:19:17.069747 4675 scope.go:117] "RemoveContainer" containerID="4f30d5c534bbb5e274d02ef672cb63f9ea8daa75cdc60abd2d4d3fdb42b7427b" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.889457 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g4wzp"] Nov 21 14:20:24 crc kubenswrapper[4675]: E1121 14:20:24.890487 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="registry-server" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.890500 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="registry-server" Nov 21 14:20:24 crc kubenswrapper[4675]: E1121 14:20:24.890517 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="extract-utilities" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.890523 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="extract-utilities" Nov 21 14:20:24 crc kubenswrapper[4675]: E1121 14:20:24.890556 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" 
containerName="extract-content" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.890562 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="extract-content" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.890773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2185c4a-a2f9-4479-91bb-799d8a9b6e53" containerName="registry-server" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.892667 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.922838 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4wzp"] Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.968640 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj92j\" (UniqueName: \"kubernetes.io/projected/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-kube-api-access-qj92j\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.968969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-catalog-content\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:24 crc kubenswrapper[4675]: I1121 14:20:24.969050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-utilities\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.071371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj92j\" (UniqueName: \"kubernetes.io/projected/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-kube-api-access-qj92j\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.071589 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-catalog-content\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.071635 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-utilities\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.072339 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-utilities\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " 
pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.072575 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-catalog-content\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.102917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj92j\" (UniqueName: \"kubernetes.io/projected/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-kube-api-access-qj92j\") pod \"community-operators-g4wzp\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.215109 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:25 crc kubenswrapper[4675]: I1121 14:20:25.868935 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4wzp"] Nov 21 14:20:26 crc kubenswrapper[4675]: I1121 14:20:26.818261 4675 generic.go:334] "Generic (PLEG): container finished" podID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerID="f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0" exitCode=0 Nov 21 14:20:26 crc kubenswrapper[4675]: I1121 14:20:26.818353 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerDied","Data":"f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0"} Nov 21 14:20:26 crc kubenswrapper[4675]: I1121 14:20:26.818725 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerStarted","Data":"fa24b55dcc2730549598b4ba33c4e2fd39a9be03f8379967a4813086cbc7ae4e"} Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.279667 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g99dr"] Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.282580 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.295774 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g99dr"] Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.428695 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-catalog-content\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.428884 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-utilities\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.429160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6r6\" (UniqueName: \"kubernetes.io/projected/06c8b264-a66b-4959-b4db-3085aa8949b3-kube-api-access-px6r6\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.531162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-catalog-content\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.531479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-utilities\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.531557 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6r6\" (UniqueName: \"kubernetes.io/projected/06c8b264-a66b-4959-b4db-3085aa8949b3-kube-api-access-px6r6\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.531656 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-catalog-content\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.531953 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-utilities\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.552921 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-px6r6\" (UniqueName: \"kubernetes.io/projected/06c8b264-a66b-4959-b4db-3085aa8949b3-kube-api-access-px6r6\") pod \"certified-operators-g99dr\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:27 crc kubenswrapper[4675]: I1121 14:20:27.604577 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:28 crc kubenswrapper[4675]: I1121 14:20:28.243022 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g99dr"] Nov 21 14:20:28 crc kubenswrapper[4675]: I1121 14:20:28.865680 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerID="091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4" exitCode=0 Nov 21 14:20:28 crc kubenswrapper[4675]: I1121 14:20:28.877572 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerStarted","Data":"0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f"} Nov 21 14:20:28 crc kubenswrapper[4675]: I1121 14:20:28.877631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerDied","Data":"091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4"} Nov 21 14:20:28 crc kubenswrapper[4675]: I1121 14:20:28.877652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerStarted","Data":"76c4516be781e3017ad8c8da3498cd67bc50fee41ea480780b23272adeceaeba"} Nov 21 14:20:32 crc kubenswrapper[4675]: I1121 14:20:32.904405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerStarted","Data":"a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4"} Nov 21 14:20:33 crc kubenswrapper[4675]: I1121 14:20:33.919483 4675 generic.go:334] "Generic (PLEG): container finished" podID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerID="0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f" exitCode=0 Nov 21 14:20:33 crc kubenswrapper[4675]: I1121 14:20:33.919565 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerDied","Data":"0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f"} Nov 21 14:20:34 crc kubenswrapper[4675]: I1121 14:20:34.933729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerStarted","Data":"70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f"} Nov 21 14:20:34 crc kubenswrapper[4675]: I1121 14:20:34.961153 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g4wzp" podStartSLOduration=3.456889653 podStartE2EDuration="10.961126769s" podCreationTimestamp="2025-11-21 14:20:24 +0000 UTC" firstStartedPulling="2025-11-21 14:20:26.820319279 +0000 UTC m=+2903.546734006" lastFinishedPulling="2025-11-21 
14:20:34.324556395 +0000 UTC m=+2911.050971122" observedRunningTime="2025-11-21 14:20:34.954878691 +0000 UTC m=+2911.681293418" watchObservedRunningTime="2025-11-21 14:20:34.961126769 +0000 UTC m=+2911.687541496" Nov 21 14:20:35 crc kubenswrapper[4675]: I1121 14:20:35.215730 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:35 crc kubenswrapper[4675]: I1121 14:20:35.215801 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:35 crc kubenswrapper[4675]: I1121 14:20:35.945550 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerID="a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4" exitCode=0 Nov 21 14:20:35 crc kubenswrapper[4675]: I1121 14:20:35.945625 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerDied","Data":"a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4"} Nov 21 14:20:36 crc kubenswrapper[4675]: I1121 14:20:36.264387 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g4wzp" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="registry-server" probeResult="failure" output=< Nov 21 14:20:36 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:20:36 crc kubenswrapper[4675]: > Nov 21 14:20:36 crc kubenswrapper[4675]: I1121 14:20:36.959760 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerStarted","Data":"ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9"} Nov 21 14:20:37 crc kubenswrapper[4675]: I1121 14:20:37.605245 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:37 crc kubenswrapper[4675]: I1121 14:20:37.605659 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:38 crc kubenswrapper[4675]: I1121 14:20:38.686727 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g99dr" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="registry-server" probeResult="failure" output=< Nov 21 14:20:38 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:20:38 crc kubenswrapper[4675]: > Nov 21 14:20:39 crc kubenswrapper[4675]: I1121 14:20:39.992718 4675 generic.go:334] "Generic (PLEG): container finished" podID="31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" containerID="85fbdf08200c475331c541f0d7c8897ab9da6a577ded73b147e1edf3df24f8c3" exitCode=0 Nov 21 14:20:39 crc kubenswrapper[4675]: I1121 14:20:39.992827 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" event={"ID":"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687","Type":"ContainerDied","Data":"85fbdf08200c475331c541f0d7c8897ab9da6a577ded73b147e1edf3df24f8c3"} Nov 21 14:20:40 crc kubenswrapper[4675]: I1121 14:20:40.018602 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g99dr" podStartSLOduration=5.34138293 
podStartE2EDuration="13.018572113s" podCreationTimestamp="2025-11-21 14:20:27 +0000 UTC" firstStartedPulling="2025-11-21 14:20:28.872916839 +0000 UTC m=+2905.599331566" lastFinishedPulling="2025-11-21 14:20:36.550106022 +0000 UTC m=+2913.276520749" observedRunningTime="2025-11-21 14:20:36.986014409 +0000 UTC m=+2913.712429136" watchObservedRunningTime="2025-11-21 14:20:40.018572113 +0000 UTC m=+2916.744986840" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.508853 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.605235 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-inventory\") pod \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.605295 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-ssh-key\") pod \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.605442 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24swm\" (UniqueName: \"kubernetes.io/projected/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-kube-api-access-24swm\") pod \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.605827 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-combined-ca-bundle\") pod \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.605860 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-secret-0\") pod \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\" (UID: \"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687\") " Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.629957 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" (UID: "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.635819 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-kube-api-access-24swm" (OuterVolumeSpecName: "kube-api-access-24swm") pod "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" (UID: "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687"). InnerVolumeSpecName "kube-api-access-24swm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.657265 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-inventory" (OuterVolumeSpecName: "inventory") pod "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" (UID: "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.673527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" (UID: "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.677293 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" (UID: "31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.709248 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.709285 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.709295 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.709305 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:41 crc kubenswrapper[4675]: I1121 14:20:41.709316 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24swm\" (UniqueName: \"kubernetes.io/projected/31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687-kube-api-access-24swm\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.014052 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" event={"ID":"31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687","Type":"ContainerDied","Data":"8469bdd6c2527215e6ab5a359ca9eff12f64e6a2ead0ab75168c64579c250246"} Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.014113 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8469bdd6c2527215e6ab5a359ca9eff12f64e6a2ead0ab75168c64579c250246" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.014139 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.140145 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78"] Nov 21 14:20:42 crc kubenswrapper[4675]: E1121 14:20:42.140818 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.140840 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.141214 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.142366 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.153162 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.153540 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.153695 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.153832 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.153962 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.154343 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.154374 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.159119 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78"] Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235504 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235571 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235594 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235690 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235711 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235734 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235796 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.235866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25fdn\" (UniqueName: \"kubernetes.io/projected/2205f0b5-339c-4165-84fd-9c9f117d757f-kube-api-access-25fdn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.338641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25fdn\" (UniqueName: \"kubernetes.io/projected/2205f0b5-339c-4165-84fd-9c9f117d757f-kube-api-access-25fdn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc 
kubenswrapper[4675]: I1121 14:20:42.338844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.338887 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.338921 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.338956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.339516 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.339550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.339579 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.339619 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.340524 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.343753 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.344976 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.347149 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.348393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.360524 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.369651 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.374899 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25fdn\" (UniqueName: \"kubernetes.io/projected/2205f0b5-339c-4165-84fd-9c9f117d757f-kube-api-access-25fdn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.376656 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vlb78\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:42 crc kubenswrapper[4675]: I1121 14:20:42.491247 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:20:43 crc kubenswrapper[4675]: I1121 14:20:43.093002 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78"] Nov 21 14:20:44 crc kubenswrapper[4675]: I1121 14:20:44.035379 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" event={"ID":"2205f0b5-339c-4165-84fd-9c9f117d757f","Type":"ContainerStarted","Data":"882a9389b4ced5843e0ea4ee99efd6867cea64264815bc6bb025717e051967fe"} Nov 21 14:20:44 crc kubenswrapper[4675]: I1121 14:20:44.035763 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" event={"ID":"2205f0b5-339c-4165-84fd-9c9f117d757f","Type":"ContainerStarted","Data":"2628ca45f5870f78098580bb81bb78bbfb91f2bb5b6770058fd4e6ca58736aee"} Nov 21 14:20:44 crc kubenswrapper[4675]: I1121 14:20:44.053646 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" podStartSLOduration=1.636066486 podStartE2EDuration="2.05362626s" podCreationTimestamp="2025-11-21 14:20:42 +0000 UTC" firstStartedPulling="2025-11-21 14:20:43.095356063 +0000 UTC m=+2919.821770790" lastFinishedPulling="2025-11-21 14:20:43.512915837 +0000 UTC m=+2920.239330564" observedRunningTime="2025-11-21 14:20:44.051576728 +0000 UTC m=+2920.777991445" watchObservedRunningTime="2025-11-21 14:20:44.05362626 +0000 UTC m=+2920.780040987" Nov 21 14:20:46 crc kubenswrapper[4675]: I1121 14:20:46.267860 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g4wzp" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="registry-server" probeResult="failure" output=< Nov 21 14:20:46 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:20:46 crc kubenswrapper[4675]: > Nov 21 14:20:47 crc kubenswrapper[4675]: I1121 14:20:47.665587 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:47 crc kubenswrapper[4675]: I1121 14:20:47.730531 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:47 crc kubenswrapper[4675]: I1121 14:20:47.907193 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g99dr"] Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.105946 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g99dr" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="registry-server" containerID="cri-o://ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9" gracePeriod=2 Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.692322 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.828901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px6r6\" (UniqueName: \"kubernetes.io/projected/06c8b264-a66b-4959-b4db-3085aa8949b3-kube-api-access-px6r6\") pod \"06c8b264-a66b-4959-b4db-3085aa8949b3\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.829130 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-utilities\") pod \"06c8b264-a66b-4959-b4db-3085aa8949b3\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.829172 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-catalog-content\") pod \"06c8b264-a66b-4959-b4db-3085aa8949b3\" (UID: \"06c8b264-a66b-4959-b4db-3085aa8949b3\") " Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.836260 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-utilities" (OuterVolumeSpecName: "utilities") pod "06c8b264-a66b-4959-b4db-3085aa8949b3" (UID: "06c8b264-a66b-4959-b4db-3085aa8949b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.885337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c8b264-a66b-4959-b4db-3085aa8949b3" (UID: "06c8b264-a66b-4959-b4db-3085aa8949b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.943209 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:49 crc kubenswrapper[4675]: I1121 14:20:49.943420 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c8b264-a66b-4959-b4db-3085aa8949b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.124468 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerID="ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9" exitCode=0 Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.124517 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerDied","Data":"ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9"} Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.124552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g99dr" event={"ID":"06c8b264-a66b-4959-b4db-3085aa8949b3","Type":"ContainerDied","Data":"76c4516be781e3017ad8c8da3498cd67bc50fee41ea480780b23272adeceaeba"} Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.124573 4675 scope.go:117] "RemoveContainer" containerID="ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.124768 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g99dr" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.154952 4675 scope.go:117] "RemoveContainer" containerID="a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.391823 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c8b264-a66b-4959-b4db-3085aa8949b3-kube-api-access-px6r6" (OuterVolumeSpecName: "kube-api-access-px6r6") pod "06c8b264-a66b-4959-b4db-3085aa8949b3" (UID: "06c8b264-a66b-4959-b4db-3085aa8949b3"). InnerVolumeSpecName "kube-api-access-px6r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.410193 4675 scope.go:117] "RemoveContainer" containerID="091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.431151 4675 scope.go:117] "RemoveContainer" containerID="ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9" Nov 21 14:20:50 crc kubenswrapper[4675]: E1121 14:20:50.431690 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9\": container with ID starting with ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9 not found: ID does not exist" containerID="ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.431736 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9"} err="failed to get container status \"ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9\": rpc error: code = NotFound desc = could not find container \"ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9\": container with ID starting with ff3e218cbdf0fa0ab27762866acda2067301f4b11855711d7e1f925b7742a6b9 not found: ID does not exist" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.431767 4675 scope.go:117] "RemoveContainer" containerID="a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4" Nov 21 14:20:50 crc kubenswrapper[4675]: E1121 14:20:50.432150 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4\": container with ID starting with a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4 not found: ID does not exist" containerID="a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.432183 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4"} err="failed to get container status \"a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4\": rpc error: code = NotFound desc = could not find container \"a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4\": container with ID starting with a739216cf0edd350d1e6d40b09a4924422bfe39c05a63adb55535a4e2d3a54f4 not found: ID does not exist" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.432201 4675 scope.go:117] "RemoveContainer" containerID="091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4" Nov 21 14:20:50 crc kubenswrapper[4675]: E1121 14:20:50.432513 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4\": container with ID starting with 091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4 not found: ID does not exist" containerID="091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.432541 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4"} err="failed to get container status \"091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4\": rpc error: code = NotFound desc = could not find container \"091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4\": container with ID starting with 091009ec1a35b1758d97a1ae46c9322217c5203e0714dcdac1e947a35d60fca4 not found: ID does not exist" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.455678 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px6r6\" (UniqueName: \"kubernetes.io/projected/06c8b264-a66b-4959-b4db-3085aa8949b3-kube-api-access-px6r6\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.507787 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g99dr"] Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.520125 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g99dr"] Nov 21 14:20:50 crc kubenswrapper[4675]: I1121 14:20:50.863223 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" path="/var/lib/kubelet/pods/06c8b264-a66b-4959-b4db-3085aa8949b3/volumes" Nov 21 14:20:55 crc kubenswrapper[4675]: I1121 14:20:55.271755 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:55 crc kubenswrapper[4675]: I1121 14:20:55.322137 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:56 crc kubenswrapper[4675]: I1121 14:20:56.086503 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4wzp"] Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.211043 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g4wzp" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="registry-server" containerID="cri-o://70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f" gracePeriod=2 Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.703996 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.830783 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-catalog-content\") pod \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.831025 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj92j\" (UniqueName: \"kubernetes.io/projected/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-kube-api-access-qj92j\") pod \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.831057 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-utilities\") pod \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\" (UID: \"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e\") " Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.831822 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-utilities" (OuterVolumeSpecName: "utilities") pod "1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" (UID: "1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.838335 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-kube-api-access-qj92j" (OuterVolumeSpecName: "kube-api-access-qj92j") pod "1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" (UID: "1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e"). InnerVolumeSpecName "kube-api-access-qj92j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.935867 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj92j\" (UniqueName: \"kubernetes.io/projected/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-kube-api-access-qj92j\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.936210 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:57 crc kubenswrapper[4675]: I1121 14:20:57.940583 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" (UID: "1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.038062 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.225536 4675 generic.go:334] "Generic (PLEG): container finished" podID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerID="70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f" exitCode=0 Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.225595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerDied","Data":"70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f"} Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.225626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4wzp" event={"ID":"1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e","Type":"ContainerDied","Data":"fa24b55dcc2730549598b4ba33c4e2fd39a9be03f8379967a4813086cbc7ae4e"} Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.225646 4675 scope.go:117] "RemoveContainer" containerID="70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.225656 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4wzp" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.257752 4675 scope.go:117] "RemoveContainer" containerID="0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.293671 4675 scope.go:117] "RemoveContainer" containerID="f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.297688 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4wzp"] Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.311044 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g4wzp"] Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.356212 4675 scope.go:117] "RemoveContainer" containerID="70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f" Nov 21 14:20:58 crc kubenswrapper[4675]: E1121 14:20:58.356692 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f\": container with ID starting with 70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f not found: ID does not exist" containerID="70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.356755 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f"} err="failed to get container status \"70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f\": rpc error: code = NotFound desc = could not find container \"70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f\": container with ID starting with 70ae8b65a03adf4d7431aa1760a50cf5ade32663e5d40144f5f90da3bde2db5f not found: ID does not exist" Nov 21 
14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.356785 4675 scope.go:117] "RemoveContainer" containerID="0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f" Nov 21 14:20:58 crc kubenswrapper[4675]: E1121 14:20:58.357126 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f\": container with ID starting with 0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f not found: ID does not exist" containerID="0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.357158 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f"} err="failed to get container status \"0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f\": rpc error: code = NotFound desc = could not find container \"0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f\": container with ID starting with 0535073d27134c3b6940f34536ee8d4e7508e8be1970ec29467c9b08d6fa5a3f not found: ID does not exist" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.357198 4675 scope.go:117] "RemoveContainer" containerID="f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0" Nov 21 14:20:58 crc kubenswrapper[4675]: E1121 14:20:58.357423 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0\": container with ID starting with f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0 not found: ID does not exist" containerID="f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.357455 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0"} err="failed to get container status \"f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0\": rpc error: code = NotFound desc = could not find container \"f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0\": container with ID starting with f2e06285ce639e9d8e77dff244ba4699e9eb0f4857a71b70cc142ba45bc6acc0 not found: ID does not exist" Nov 21 14:20:58 crc kubenswrapper[4675]: E1121 14:20:58.366412 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fcdbd3f_de62_42a1_a59e_fd58e7a5ab8e.slice/crio-fa24b55dcc2730549598b4ba33c4e2fd39a9be03f8379967a4813086cbc7ae4e\": RecentStats: unable to find data in memory cache]" Nov 21 14:20:58 crc kubenswrapper[4675]: I1121 14:20:58.862309 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" path="/var/lib/kubelet/pods/1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e/volumes" Nov 21 14:21:16 crc kubenswrapper[4675]: I1121 14:21:16.136759 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:21:16 crc kubenswrapper[4675]: I1121 14:21:16.137347 4675 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:21:46 crc kubenswrapper[4675]: I1121 14:21:46.136286 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:21:46 crc kubenswrapper[4675]: I1121 14:21:46.137755 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.625820 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jpqvb"] Nov 21 14:22:05 crc kubenswrapper[4675]: E1121 14:22:05.627060 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="extract-content" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627095 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="extract-content" Nov 21 14:22:05 crc kubenswrapper[4675]: E1121 14:22:05.627128 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="extract-utilities" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627137 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="extract-utilities" Nov 21 14:22:05 crc kubenswrapper[4675]: E1121 14:22:05.627181 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="extract-utilities" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627191 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="extract-utilities" Nov 21 14:22:05 crc kubenswrapper[4675]: E1121 14:22:05.627210 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="extract-content" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627217 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="extract-content" Nov 21 14:22:05 crc kubenswrapper[4675]: E1121 14:22:05.627236 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="registry-server" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627245 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="registry-server" Nov 21 14:22:05 crc kubenswrapper[4675]: E1121 14:22:05.627261 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="registry-server" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627269 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="registry-server" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627623 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c8b264-a66b-4959-b4db-3085aa8949b3" containerName="registry-server" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.627651 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fcdbd3f-de62-42a1-a59e-fd58e7a5ab8e" containerName="registry-server" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.661344 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpqvb"] Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.661503 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.727243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-utilities\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.727455 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-catalog-content\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.727484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgstl\" (UniqueName: \"kubernetes.io/projected/53707d1c-ef1b-441c-b2df-5422831326f2-kube-api-access-vgstl\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.828972 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-catalog-content\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.829031 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgstl\" (UniqueName: \"kubernetes.io/projected/53707d1c-ef1b-441c-b2df-5422831326f2-kube-api-access-vgstl\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.829198 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-utilities\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.829603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-catalog-content\") pod \"redhat-operators-jpqvb\" (UID: 
\"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.829644 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-utilities\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.854144 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgstl\" (UniqueName: \"kubernetes.io/projected/53707d1c-ef1b-441c-b2df-5422831326f2-kube-api-access-vgstl\") pod \"redhat-operators-jpqvb\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:05 crc kubenswrapper[4675]: I1121 14:22:05.996503 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:06 crc kubenswrapper[4675]: I1121 14:22:06.504087 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jpqvb"] Nov 21 14:22:06 crc kubenswrapper[4675]: I1121 14:22:06.963915 4675 generic.go:334] "Generic (PLEG): container finished" podID="53707d1c-ef1b-441c-b2df-5422831326f2" containerID="2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f" exitCode=0 Nov 21 14:22:06 crc kubenswrapper[4675]: I1121 14:22:06.963983 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerDied","Data":"2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f"} Nov 21 14:22:06 crc kubenswrapper[4675]: I1121 14:22:06.964023 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerStarted","Data":"e39864217bbfbd6642c0d2a1b783be6b2707487f603bdb0caa0d3333f78e14a5"} Nov 21 14:22:08 crc kubenswrapper[4675]: I1121 14:22:08.991351 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerStarted","Data":"75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace"} Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.077902 4675 generic.go:334] "Generic (PLEG): container finished" podID="53707d1c-ef1b-441c-b2df-5422831326f2" containerID="75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace" exitCode=0 Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.077986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerDied","Data":"75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace"} Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.136568 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.136635 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" 
podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.136689 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.137689 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:22:16 crc kubenswrapper[4675]: I1121 14:22:16.137769 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" gracePeriod=600 Nov 21 14:22:16 crc kubenswrapper[4675]: E1121 14:22:16.259818 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:22:17 crc kubenswrapper[4675]: I1121 14:22:17.095325 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" exitCode=0 Nov 21 14:22:17 crc kubenswrapper[4675]: I1121 14:22:17.095382 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"} Nov 21 14:22:17 crc kubenswrapper[4675]: I1121 14:22:17.095670 4675 scope.go:117] "RemoveContainer" containerID="fd06f4194ba69a1a4c4840b13f7ee009236c204361580f9b8e197db00af3a00d" Nov 21 14:22:17 crc kubenswrapper[4675]: I1121 14:22:17.096732 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:22:17 crc kubenswrapper[4675]: E1121 14:22:17.097197 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:22:17 crc kubenswrapper[4675]: I1121 14:22:17.099314 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerStarted","Data":"68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9"} Nov 21 14:22:17 crc kubenswrapper[4675]: I1121 
14:22:17.145210 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jpqvb" podStartSLOduration=2.333415199 podStartE2EDuration="12.145191031s" podCreationTimestamp="2025-11-21 14:22:05 +0000 UTC" firstStartedPulling="2025-11-21 14:22:06.967113309 +0000 UTC m=+3003.693528046" lastFinishedPulling="2025-11-21 14:22:16.778889121 +0000 UTC m=+3013.505303878" observedRunningTime="2025-11-21 14:22:17.139576769 +0000 UTC m=+3013.865991496" watchObservedRunningTime="2025-11-21 14:22:17.145191031 +0000 UTC m=+3013.871605758" Nov 21 14:22:25 crc kubenswrapper[4675]: I1121 14:22:25.997461 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:25 crc kubenswrapper[4675]: I1121 14:22:25.997969 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:26 crc kubenswrapper[4675]: I1121 14:22:26.050290 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:26 crc kubenswrapper[4675]: I1121 14:22:26.265588 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:26 crc kubenswrapper[4675]: I1121 14:22:26.333880 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpqvb"] Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.241904 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jpqvb" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="registry-server" containerID="cri-o://68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9" gracePeriod=2 Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.764371 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.836548 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-utilities\") pod \"53707d1c-ef1b-441c-b2df-5422831326f2\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.836707 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-catalog-content\") pod \"53707d1c-ef1b-441c-b2df-5422831326f2\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.836760 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgstl\" (UniqueName: \"kubernetes.io/projected/53707d1c-ef1b-441c-b2df-5422831326f2-kube-api-access-vgstl\") pod \"53707d1c-ef1b-441c-b2df-5422831326f2\" (UID: \"53707d1c-ef1b-441c-b2df-5422831326f2\") " Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.837591 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-utilities" (OuterVolumeSpecName: "utilities") pod "53707d1c-ef1b-441c-b2df-5422831326f2" (UID: "53707d1c-ef1b-441c-b2df-5422831326f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.837712 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.842813 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53707d1c-ef1b-441c-b2df-5422831326f2-kube-api-access-vgstl" (OuterVolumeSpecName: "kube-api-access-vgstl") pod "53707d1c-ef1b-441c-b2df-5422831326f2" (UID: "53707d1c-ef1b-441c-b2df-5422831326f2"). InnerVolumeSpecName "kube-api-access-vgstl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.930601 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53707d1c-ef1b-441c-b2df-5422831326f2" (UID: "53707d1c-ef1b-441c-b2df-5422831326f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.940469 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53707d1c-ef1b-441c-b2df-5422831326f2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:22:28 crc kubenswrapper[4675]: I1121 14:22:28.940508 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgstl\" (UniqueName: \"kubernetes.io/projected/53707d1c-ef1b-441c-b2df-5422831326f2-kube-api-access-vgstl\") on node \"crc\" DevicePath \"\"" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.276582 4675 generic.go:334] "Generic (PLEG): container finished" podID="53707d1c-ef1b-441c-b2df-5422831326f2" containerID="68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9" exitCode=0 Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.276631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerDied","Data":"68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9"} Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.276661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jpqvb" event={"ID":"53707d1c-ef1b-441c-b2df-5422831326f2","Type":"ContainerDied","Data":"e39864217bbfbd6642c0d2a1b783be6b2707487f603bdb0caa0d3333f78e14a5"} Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.276679 4675 scope.go:117] "RemoveContainer" containerID="68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.276848 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jpqvb" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.366028 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jpqvb"] Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.368406 4675 scope.go:117] "RemoveContainer" containerID="75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.408652 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jpqvb"] Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.440600 4675 scope.go:117] "RemoveContainer" containerID="2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.470461 4675 scope.go:117] "RemoveContainer" containerID="68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9" Nov 21 14:22:29 crc kubenswrapper[4675]: E1121 14:22:29.471296 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9\": container with ID starting with 68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9 not found: ID does not exist" containerID="68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.471372 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9"} err="failed to get container status \"68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9\": rpc error: code = NotFound desc = could not find container \"68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9\": container with ID starting with 68ec34ebb0d87bd555e73ca90e075725e74ee1dd115d38e6c74a67feb92a27c9 not found: ID does not exist" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.471400 4675 scope.go:117] "RemoveContainer" containerID="75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace" Nov 21 14:22:29 crc kubenswrapper[4675]: E1121 14:22:29.471681 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace\": container with ID starting with 75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace not found: ID does not exist" containerID="75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.471713 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace"} err="failed to get container status \"75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace\": rpc error: code = NotFound desc = could not find container \"75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace\": container with ID starting with 75b02faa632ea0ffeea2fdd2eca0870430fe3e07b57ec6bd6c6b10cfaf785ace not found: ID does not exist" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.471732 4675 scope.go:117] "RemoveContainer" containerID="2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f" Nov 21 14:22:29 crc kubenswrapper[4675]: E1121 14:22:29.473141 4675 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f\": container with ID starting with 2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f not found: ID does not exist" containerID="2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f" Nov 21 14:22:29 crc kubenswrapper[4675]: I1121 14:22:29.473177 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f"} err="failed to get container status \"2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f\": rpc error: code = NotFound desc = could not find container \"2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f\": container with ID starting with 2aa8f1d093c7d384a92027f0d92a59a11ed9587961492c2e53618d84e6e27d9f not found: ID does not exist" Nov 21 14:22:30 crc kubenswrapper[4675]: I1121 14:22:30.865876 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" path="/var/lib/kubelet/pods/53707d1c-ef1b-441c-b2df-5422831326f2/volumes" Nov 21 14:22:31 crc kubenswrapper[4675]: I1121 14:22:31.849053 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:22:31 crc kubenswrapper[4675]: E1121 14:22:31.849662 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:22:45 crc kubenswrapper[4675]: I1121 14:22:45.849220 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:22:45 crc kubenswrapper[4675]: E1121 14:22:45.850153 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:22:58 crc kubenswrapper[4675]: I1121 14:22:58.849926 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:22:58 crc kubenswrapper[4675]: E1121 14:22:58.850866 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:23:10 crc kubenswrapper[4675]: I1121 14:23:10.851444 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:23:10 crc kubenswrapper[4675]: E1121 14:23:10.852247 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:23:17 crc kubenswrapper[4675]: I1121 14:23:17.821377 4675 generic.go:334] "Generic (PLEG): container finished" podID="2205f0b5-339c-4165-84fd-9c9f117d757f" containerID="882a9389b4ced5843e0ea4ee99efd6867cea64264815bc6bb025717e051967fe" exitCode=0 Nov 21 14:23:17 crc kubenswrapper[4675]: I1121 14:23:17.821467 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" event={"ID":"2205f0b5-339c-4165-84fd-9c9f117d757f","Type":"ContainerDied","Data":"882a9389b4ced5843e0ea4ee99efd6867cea64264815bc6bb025717e051967fe"} Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.315979 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.421099 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25fdn\" (UniqueName: \"kubernetes.io/projected/2205f0b5-339c-4165-84fd-9c9f117d757f-kube-api-access-25fdn\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.421419 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-0\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.421449 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-ssh-key\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.421489 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-combined-ca-bundle\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.421536 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-0\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.422096 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-extra-config-0\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.422167 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-1\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.422272 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-1\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.422387 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-inventory\") pod \"2205f0b5-339c-4165-84fd-9c9f117d757f\" (UID: \"2205f0b5-339c-4165-84fd-9c9f117d757f\") " Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.430974 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2205f0b5-339c-4165-84fd-9c9f117d757f-kube-api-access-25fdn" (OuterVolumeSpecName: "kube-api-access-25fdn") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "kube-api-access-25fdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.450775 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.457495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.466691 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.470062 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-inventory" (OuterVolumeSpecName: "inventory") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.476253 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.477775 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.488252 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.496610 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2205f0b5-339c-4165-84fd-9c9f117d757f" (UID: "2205f0b5-339c-4165-84fd-9c9f117d757f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525254 4675 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525292 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525301 4675 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525311 4675 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525322 4675 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525329 4675 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525337 4675 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525345 4675 reconciler_common.go:293] "Volume detached for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/2205f0b5-339c-4165-84fd-9c9f117d757f-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.525354 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25fdn\" (UniqueName: \"kubernetes.io/projected/2205f0b5-339c-4165-84fd-9c9f117d757f-kube-api-access-25fdn\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.844476 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" event={"ID":"2205f0b5-339c-4165-84fd-9c9f117d757f","Type":"ContainerDied","Data":"2628ca45f5870f78098580bb81bb78bbfb91f2bb5b6770058fd4e6ca58736aee"} Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.844761 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2628ca45f5870f78098580bb81bb78bbfb91f2bb5b6770058fd4e6ca58736aee" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.844570 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vlb78" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.957681 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25"] Nov 21 14:23:19 crc kubenswrapper[4675]: E1121 14:23:19.958250 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="registry-server" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.958268 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="registry-server" Nov 21 14:23:19 crc kubenswrapper[4675]: E1121 14:23:19.958294 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="extract-content" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.958300 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="extract-content" Nov 21 14:23:19 crc kubenswrapper[4675]: E1121 14:23:19.958351 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="extract-utilities" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.958364 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="extract-utilities" Nov 21 14:23:19 crc kubenswrapper[4675]: E1121 14:23:19.958373 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2205f0b5-339c-4165-84fd-9c9f117d757f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.958380 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2205f0b5-339c-4165-84fd-9c9f117d757f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.958576 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="53707d1c-ef1b-441c-b2df-5422831326f2" containerName="registry-server" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.958604 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2205f0b5-339c-4165-84fd-9c9f117d757f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.959488 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.961651 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.961976 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.962341 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.962518 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.967852 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:23:19 crc kubenswrapper[4675]: I1121 14:23:19.970423 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25"] Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.039041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.039167 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.039215 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.039306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.039345 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299p4\" (UniqueName: \"kubernetes.io/projected/6068452e-fbc5-44c6-8141-d3b8b3de6f92-kube-api-access-299p4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 
crc kubenswrapper[4675]: I1121 14:23:20.039505 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.039547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.141801 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.141871 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.141933 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.142020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.142084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299p4\" (UniqueName: \"kubernetes.io/projected/6068452e-fbc5-44c6-8141-d3b8b3de6f92-kube-api-access-299p4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.142267 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: 
\"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.142299 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.147024 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.147040 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.147531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.148238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.149250 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.152492 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.158520 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299p4\" (UniqueName: \"kubernetes.io/projected/6068452e-fbc5-44c6-8141-d3b8b3de6f92-kube-api-access-299p4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pqp25\" (UID: 
\"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.292976 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" Nov 21 14:23:20 crc kubenswrapper[4675]: I1121 14:23:20.863478 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25"] Nov 21 14:23:21 crc kubenswrapper[4675]: I1121 14:23:21.866507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" event={"ID":"6068452e-fbc5-44c6-8141-d3b8b3de6f92","Type":"ContainerStarted","Data":"da367f320dad0ac04da49b9ae218c25464af110511fcb2380681078881ec9796"} Nov 21 14:23:21 crc kubenswrapper[4675]: I1121 14:23:21.866768 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" event={"ID":"6068452e-fbc5-44c6-8141-d3b8b3de6f92","Type":"ContainerStarted","Data":"6064484faf14aac1fcf79e8872f6937c481f02503d9c9cd27954fbe8cd4136e4"} Nov 21 14:23:21 crc kubenswrapper[4675]: I1121 14:23:21.884723 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" podStartSLOduration=2.195346302 podStartE2EDuration="2.884701539s" podCreationTimestamp="2025-11-21 14:23:19 +0000 UTC" firstStartedPulling="2025-11-21 14:23:20.861301327 +0000 UTC m=+3077.587716054" lastFinishedPulling="2025-11-21 14:23:21.550656564 +0000 UTC m=+3078.277071291" observedRunningTime="2025-11-21 14:23:21.880155574 +0000 UTC m=+3078.606570301" watchObservedRunningTime="2025-11-21 14:23:21.884701539 +0000 UTC m=+3078.611116266" Nov 21 14:23:24 crc kubenswrapper[4675]: I1121 14:23:24.861136 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:23:24 crc kubenswrapper[4675]: E1121 14:23:24.861939 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:23:37 crc kubenswrapper[4675]: I1121 14:23:37.849788 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:23:37 crc kubenswrapper[4675]: E1121 14:23:37.850687 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:23:48 crc kubenswrapper[4675]: I1121 14:23:48.849389 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:23:48 crc kubenswrapper[4675]: E1121 14:23:48.850430 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:24:03 crc kubenswrapper[4675]: I1121 14:24:03.849483 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:24:03 crc kubenswrapper[4675]: E1121 14:24:03.850418 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:24:18 crc kubenswrapper[4675]: I1121 14:24:18.850242 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:24:18 crc kubenswrapper[4675]: E1121 14:24:18.851032 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:24:33 crc kubenswrapper[4675]: I1121 14:24:33.849681 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:24:33 crc kubenswrapper[4675]: E1121 14:24:33.850482 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:24:48 crc kubenswrapper[4675]: I1121 14:24:48.850203 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:24:48 crc kubenswrapper[4675]: E1121 14:24:48.851216 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:25:02 crc kubenswrapper[4675]: I1121 14:25:02.849541 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:25:02 crc kubenswrapper[4675]: E1121 14:25:02.850715 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:25:13 crc kubenswrapper[4675]: I1121 14:25:13.849810 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:25:13 crc kubenswrapper[4675]: E1121 14:25:13.850573 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:25:26 crc kubenswrapper[4675]: I1121 14:25:26.850011 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:25:26 crc kubenswrapper[4675]: E1121 14:25:26.850973 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:25:31 crc kubenswrapper[4675]: I1121 14:25:31.279832 4675 generic.go:334] "Generic (PLEG): container finished" podID="6068452e-fbc5-44c6-8141-d3b8b3de6f92" containerID="da367f320dad0ac04da49b9ae218c25464af110511fcb2380681078881ec9796" exitCode=0 Nov 21 14:25:31 crc kubenswrapper[4675]: I1121 14:25:31.279911 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" event={"ID":"6068452e-fbc5-44c6-8141-d3b8b3de6f92","Type":"ContainerDied","Data":"da367f320dad0ac04da49b9ae218c25464af110511fcb2380681078881ec9796"} Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.819986 4675 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.879750 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-0\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.879816 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-inventory\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.879872 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-2\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.880023 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-1\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.880046 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ssh-key\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.880165 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-299p4\" (UniqueName: \"kubernetes.io/projected/6068452e-fbc5-44c6-8141-d3b8b3de6f92-kube-api-access-299p4\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.880212 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-telemetry-combined-ca-bundle\") pod \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\" (UID: \"6068452e-fbc5-44c6-8141-d3b8b3de6f92\") "
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.895295 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.896598 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6068452e-fbc5-44c6-8141-d3b8b3de6f92-kube-api-access-299p4" (OuterVolumeSpecName: "kube-api-access-299p4") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "kube-api-access-299p4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.912207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-inventory" (OuterVolumeSpecName: "inventory") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.914176 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.926497 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.927722 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.943234 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6068452e-fbc5-44c6-8141-d3b8b3de6f92" (UID: "6068452e-fbc5-44c6-8141-d3b8b3de6f92"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983440 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-299p4\" (UniqueName: \"kubernetes.io/projected/6068452e-fbc5-44c6-8141-d3b8b3de6f92-kube-api-access-299p4\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983501 4675 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983514 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983527 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983539 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983554 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:32 crc kubenswrapper[4675]: I1121 14:25:32.983563 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6068452e-fbc5-44c6-8141-d3b8b3de6f92-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.312546 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25" event={"ID":"6068452e-fbc5-44c6-8141-d3b8b3de6f92","Type":"ContainerDied","Data":"6064484faf14aac1fcf79e8872f6937c481f02503d9c9cd27954fbe8cd4136e4"}
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.312601 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6064484faf14aac1fcf79e8872f6937c481f02503d9c9cd27954fbe8cd4136e4"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.312659 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pqp25"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.405407 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"]
Nov 21 14:25:33 crc kubenswrapper[4675]: E1121 14:25:33.405926 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6068452e-fbc5-44c6-8141-d3b8b3de6f92" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.405947 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6068452e-fbc5-44c6-8141-d3b8b3de6f92" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.406265 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6068452e-fbc5-44c6-8141-d3b8b3de6f92" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.407312 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.410250 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.410297 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.410766 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.411211 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.413634 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.421014 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"]
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.505327 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.505420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.505480 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhvr\" (UniqueName: \"kubernetes.io/projected/087ded3f-0cd4-4471-b0b8-f23a7de03a26-kube-api-access-rlhvr\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.505837 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.505906 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.506175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.506226 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608497 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlhvr\" (UniqueName: \"kubernetes.io/projected/087ded3f-0cd4-4471-b0b8-f23a7de03a26-kube-api-access-rlhvr\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608597 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608713 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.608851 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.612964 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.613535 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.614329 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.615121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.615959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.616677 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.635523 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlhvr\" (UniqueName: \"kubernetes.io/projected/087ded3f-0cd4-4471-b0b8-f23a7de03a26-kube-api-access-rlhvr\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:33 crc kubenswrapper[4675]: I1121 14:25:33.728292 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"
Nov 21 14:25:34 crc kubenswrapper[4675]: I1121 14:25:34.267440 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg"]
Nov 21 14:25:34 crc kubenswrapper[4675]: I1121 14:25:34.280054 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:25:34 crc kubenswrapper[4675]: I1121 14:25:34.326218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg" event={"ID":"087ded3f-0cd4-4471-b0b8-f23a7de03a26","Type":"ContainerStarted","Data":"2edb48829f5db71eaaf90cde2b3594c9115115fbdf7b20a6c78bf539990b5405"}
Nov 21 14:25:35 crc kubenswrapper[4675]: I1121 14:25:35.337199 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg" event={"ID":"087ded3f-0cd4-4471-b0b8-f23a7de03a26","Type":"ContainerStarted","Data":"62e3aa14f56931715705d62cc2b46eb07e19c30d118218551fc158dd54d189d0"}
Nov 21 14:25:35 crc kubenswrapper[4675]: I1121 14:25:35.362469 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg" podStartSLOduration=1.9523836559999999 podStartE2EDuration="2.36244876s" podCreationTimestamp="2025-11-21 14:25:33 +0000 UTC" firstStartedPulling="2025-11-21 14:25:34.279813333 +0000 UTC m=+3211.006228060" lastFinishedPulling="2025-11-21 14:25:34.689878437 +0000 UTC m=+3211.416293164" observedRunningTime="2025-11-21 14:25:35.353939335 +0000 UTC m=+3212.080354062" watchObservedRunningTime="2025-11-21 14:25:35.36244876 +0000 UTC m=+3212.088863487"
Nov 21 14:25:38 crc kubenswrapper[4675]: I1121 14:25:38.851397 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:25:38 crc kubenswrapper[4675]: E1121 14:25:38.852475 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:25:53 crc kubenswrapper[4675]: I1121 14:25:53.849474 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:25:53 crc kubenswrapper[4675]: E1121 14:25:53.850439 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:26:08 crc kubenswrapper[4675]: I1121 14:26:08.849052 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:26:08 crc kubenswrapper[4675]: E1121 14:26:08.849845 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:26:21 crc kubenswrapper[4675]: I1121 14:26:21.849430 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:26:21 crc kubenswrapper[4675]: E1121 14:26:21.850635 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:26:34 crc kubenswrapper[4675]: I1121 14:26:34.856667 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:26:34 crc kubenswrapper[4675]: E1121 14:26:34.857521 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:26:48 crc kubenswrapper[4675]: I1121 14:26:48.849501 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:26:48 crc kubenswrapper[4675]: E1121 14:26:48.850362 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:27:01 crc kubenswrapper[4675]: I1121 14:27:01.850751 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:27:01 crc kubenswrapper[4675]: E1121 14:27:01.851732 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:27:12 crc kubenswrapper[4675]: I1121 14:27:12.849825 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399"
Nov 21 14:27:12 crc kubenswrapper[4675]: E1121 14:27:12.850903 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:27:26 crc kubenswrapper[4675]: I1121 14:27:26.850115 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:27:27 crc kubenswrapper[4675]: I1121 14:27:27.976603 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"b7be1ca3661b800d7e1be3df35e623bd7dc17c97289051a01313c20529119f4c"} Nov 21 14:27:31 crc kubenswrapper[4675]: I1121 14:27:31.009471 4675 generic.go:334] "Generic (PLEG): container finished" podID="087ded3f-0cd4-4471-b0b8-f23a7de03a26" containerID="62e3aa14f56931715705d62cc2b46eb07e19c30d118218551fc158dd54d189d0" exitCode=0 Nov 21 14:27:31 crc kubenswrapper[4675]: I1121 14:27:31.009569 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg" event={"ID":"087ded3f-0cd4-4471-b0b8-f23a7de03a26","Type":"ContainerDied","Data":"62e3aa14f56931715705d62cc2b46eb07e19c30d118218551fc158dd54d189d0"} Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.509335 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563019 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlhvr\" (UniqueName: \"kubernetes.io/projected/087ded3f-0cd4-4471-b0b8-f23a7de03a26-kube-api-access-rlhvr\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563155 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-inventory\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563212 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-telemetry-power-monitoring-combined-ca-bundle\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-2\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563352 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-1\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563376 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-0\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.563455 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ssh-key\") pod \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\" (UID: \"087ded3f-0cd4-4471-b0b8-f23a7de03a26\") " Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.569170 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.581315 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087ded3f-0cd4-4471-b0b8-f23a7de03a26-kube-api-access-rlhvr" (OuterVolumeSpecName: "kube-api-access-rlhvr") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "kube-api-access-rlhvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.603128 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.603419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-inventory" (OuterVolumeSpecName: "inventory") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.612968 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.622302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.626639 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "087ded3f-0cd4-4471-b0b8-f23a7de03a26" (UID: "087ded3f-0cd4-4471-b0b8-f23a7de03a26"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667306 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667343 4675 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667358 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667373 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667386 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667400 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087ded3f-0cd4-4471-b0b8-f23a7de03a26-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:32 crc kubenswrapper[4675]: I1121 14:27:32.667413 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlhvr\" (UniqueName: \"kubernetes.io/projected/087ded3f-0cd4-4471-b0b8-f23a7de03a26-kube-api-access-rlhvr\") on node \"crc\" DevicePath \"\"" Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.035285 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg" event={"ID":"087ded3f-0cd4-4471-b0b8-f23a7de03a26","Type":"ContainerDied","Data":"2edb48829f5db71eaaf90cde2b3594c9115115fbdf7b20a6c78bf539990b5405"} Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.035756 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2edb48829f5db71eaaf90cde2b3594c9115115fbdf7b20a6c78bf539990b5405" Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.035889 4675 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.140748 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"]
Nov 21 14:27:33 crc kubenswrapper[4675]: E1121 14:27:33.141499 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087ded3f-0cd4-4471-b0b8-f23a7de03a26" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.141533 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="087ded3f-0cd4-4471-b0b8-f23a7de03a26" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.142021 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="087ded3f-0cd4-4471-b0b8-f23a7de03a26" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.147510 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.153798 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.153798 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-lgzjk"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.155231 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"]
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.158061 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.158515 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.158515 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.286671 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.286836 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.287148 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.287210 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.287301 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkrh\" (UniqueName: \"kubernetes.io/projected/88b1961a-032d-40c5-83f7-602511b7808e-kube-api-access-qjkrh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.389761 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkrh\" (UniqueName: \"kubernetes.io/projected/88b1961a-032d-40c5-83f7-602511b7808e-kube-api-access-qjkrh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.389841 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.389902 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.390029 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.390059 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.395291 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.395414 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.396314 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.403680 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.410972 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkrh\" (UniqueName: \"kubernetes.io/projected/88b1961a-032d-40c5-83f7-602511b7808e-kube-api-access-qjkrh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mkqjd\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:33 crc kubenswrapper[4675]: I1121 14:27:33.470867 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:34 crc kubenswrapper[4675]: I1121 14:27:34.051658 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"]
Nov 21 14:27:35 crc kubenswrapper[4675]: I1121 14:27:35.060832 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd" event={"ID":"88b1961a-032d-40c5-83f7-602511b7808e","Type":"ContainerStarted","Data":"066ae898660a4abbd145cf53e95ab68411df96a4fdaacc011958079ab5ee3396"}
Nov 21 14:27:35 crc kubenswrapper[4675]: I1121 14:27:35.061473 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd" event={"ID":"88b1961a-032d-40c5-83f7-602511b7808e","Type":"ContainerStarted","Data":"7892250df8e5f0fc0283ca79dd3b2de1f96f9cb976507fdeeabfbe9125e1a2c3"}
Nov 21 14:27:35 crc kubenswrapper[4675]: I1121 14:27:35.085140 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd" podStartSLOduration=1.6406438859999999 podStartE2EDuration="2.085116979s" podCreationTimestamp="2025-11-21 14:27:33 +0000 UTC" firstStartedPulling="2025-11-21 14:27:34.056490965 +0000 UTC m=+3330.782905692" lastFinishedPulling="2025-11-21 14:27:34.500964008 +0000 UTC m=+3331.227378785" observedRunningTime="2025-11-21 14:27:35.075818714 +0000 UTC m=+3331.802233441" watchObservedRunningTime="2025-11-21 14:27:35.085116979 +0000 UTC m=+3331.811531706"
Nov 21 14:27:50 crc kubenswrapper[4675]: I1121 14:27:50.235757 4675 generic.go:334] "Generic (PLEG): container finished" podID="88b1961a-032d-40c5-83f7-602511b7808e" containerID="066ae898660a4abbd145cf53e95ab68411df96a4fdaacc011958079ab5ee3396" exitCode=0
Nov 21 14:27:50 crc kubenswrapper[4675]: I1121 14:27:50.235916 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd" event={"ID":"88b1961a-032d-40c5-83f7-602511b7808e","Type":"ContainerDied","Data":"066ae898660a4abbd145cf53e95ab68411df96a4fdaacc011958079ab5ee3396"}
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.771842 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.904520 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-1\") pod \"88b1961a-032d-40c5-83f7-602511b7808e\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") "
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.904622 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-0\") pod \"88b1961a-032d-40c5-83f7-602511b7808e\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") "
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.904668 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-inventory\") pod \"88b1961a-032d-40c5-83f7-602511b7808e\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") "
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.904749 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjkrh\" (UniqueName: \"kubernetes.io/projected/88b1961a-032d-40c5-83f7-602511b7808e-kube-api-access-qjkrh\") pod \"88b1961a-032d-40c5-83f7-602511b7808e\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") "
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.904855 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-ssh-key\") pod \"88b1961a-032d-40c5-83f7-602511b7808e\" (UID: \"88b1961a-032d-40c5-83f7-602511b7808e\") "
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.922260 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b1961a-032d-40c5-83f7-602511b7808e-kube-api-access-qjkrh" (OuterVolumeSpecName: "kube-api-access-qjkrh") pod "88b1961a-032d-40c5-83f7-602511b7808e" (UID: "88b1961a-032d-40c5-83f7-602511b7808e"). InnerVolumeSpecName "kube-api-access-qjkrh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.938715 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "88b1961a-032d-40c5-83f7-602511b7808e" (UID: "88b1961a-032d-40c5-83f7-602511b7808e"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.950900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-inventory" (OuterVolumeSpecName: "inventory") pod "88b1961a-032d-40c5-83f7-602511b7808e" (UID: "88b1961a-032d-40c5-83f7-602511b7808e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.953009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "88b1961a-032d-40c5-83f7-602511b7808e" (UID: "88b1961a-032d-40c5-83f7-602511b7808e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:27:51 crc kubenswrapper[4675]: I1121 14:27:51.954581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "88b1961a-032d-40c5-83f7-602511b7808e" (UID: "88b1961a-032d-40c5-83f7-602511b7808e"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.008181 4675 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.008253 4675 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.008265 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.008277 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjkrh\" (UniqueName: \"kubernetes.io/projected/88b1961a-032d-40c5-83f7-602511b7808e-kube-api-access-qjkrh\") on node \"crc\" DevicePath \"\""
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.008289 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88b1961a-032d-40c5-83f7-602511b7808e-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.262500 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd" event={"ID":"88b1961a-032d-40c5-83f7-602511b7808e","Type":"ContainerDied","Data":"7892250df8e5f0fc0283ca79dd3b2de1f96f9cb976507fdeeabfbe9125e1a2c3"}
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.262759 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7892250df8e5f0fc0283ca79dd3b2de1f96f9cb976507fdeeabfbe9125e1a2c3"
Nov 21 14:27:52 crc kubenswrapper[4675]: I1121 14:27:52.262613 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd"
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mkqjd" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.473306 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v97xn"] Nov 21 14:29:05 crc kubenswrapper[4675]: E1121 14:29:05.474245 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b1961a-032d-40c5-83f7-602511b7808e" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.474258 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b1961a-032d-40c5-83f7-602511b7808e" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.474532 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b1961a-032d-40c5-83f7-602511b7808e" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.476341 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.496389 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v97xn"] Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.570584 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-catalog-content\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.570851 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-utilities\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.570910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp7v8\" (UniqueName: \"kubernetes.io/projected/07a28c94-45bd-48a9-90e2-f0ed03405516-kube-api-access-hp7v8\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.673276 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp7v8\" (UniqueName: \"kubernetes.io/projected/07a28c94-45bd-48a9-90e2-f0ed03405516-kube-api-access-hp7v8\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.673419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-catalog-content\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.673536 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-utilities\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.673991 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-catalog-content\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.674019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-utilities\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.692755 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp7v8\" (UniqueName: \"kubernetes.io/projected/07a28c94-45bd-48a9-90e2-f0ed03405516-kube-api-access-hp7v8\") pod \"redhat-marketplace-v97xn\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:05 crc kubenswrapper[4675]: I1121 14:29:05.807256 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:06 crc kubenswrapper[4675]: I1121 14:29:06.296717 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v97xn"] Nov 21 14:29:06 crc kubenswrapper[4675]: W1121 14:29:06.307059 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a28c94_45bd_48a9_90e2_f0ed03405516.slice/crio-74ad220c0d0bd106880e7ab03b5055ddfa05fb3766cf36076955bcd3d7403b03 WatchSource:0}: Error finding container 74ad220c0d0bd106880e7ab03b5055ddfa05fb3766cf36076955bcd3d7403b03: Status 404 returned error can't find the container with id 74ad220c0d0bd106880e7ab03b5055ddfa05fb3766cf36076955bcd3d7403b03 Nov 21 14:29:07 crc kubenswrapper[4675]: I1121 14:29:07.328886 4675 generic.go:334] "Generic (PLEG): container finished" podID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerID="95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d" exitCode=0 Nov 21 14:29:07 crc kubenswrapper[4675]: I1121 14:29:07.329347 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v97xn" event={"ID":"07a28c94-45bd-48a9-90e2-f0ed03405516","Type":"ContainerDied","Data":"95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d"} Nov 21 14:29:07 crc kubenswrapper[4675]: I1121 14:29:07.329388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v97xn" event={"ID":"07a28c94-45bd-48a9-90e2-f0ed03405516","Type":"ContainerStarted","Data":"74ad220c0d0bd106880e7ab03b5055ddfa05fb3766cf36076955bcd3d7403b03"} Nov 21 14:29:09 crc kubenswrapper[4675]: I1121 14:29:09.359439 4675 generic.go:334] "Generic (PLEG): container finished" podID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerID="012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7" exitCode=0 Nov 21 14:29:09 crc kubenswrapper[4675]: I1121 14:29:09.359568 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-v97xn" event={"ID":"07a28c94-45bd-48a9-90e2-f0ed03405516","Type":"ContainerDied","Data":"012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7"} Nov 21 14:29:10 crc kubenswrapper[4675]: I1121 14:29:10.373291 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v97xn" event={"ID":"07a28c94-45bd-48a9-90e2-f0ed03405516","Type":"ContainerStarted","Data":"cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312"} Nov 21 14:29:10 crc kubenswrapper[4675]: I1121 14:29:10.401724 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v97xn" podStartSLOduration=2.715931512 podStartE2EDuration="5.401701756s" podCreationTimestamp="2025-11-21 14:29:05 +0000 UTC" firstStartedPulling="2025-11-21 14:29:07.3334446 +0000 UTC m=+3424.059859367" lastFinishedPulling="2025-11-21 14:29:10.019214884 +0000 UTC m=+3426.745629611" observedRunningTime="2025-11-21 14:29:10.395155191 +0000 UTC m=+3427.121569948" watchObservedRunningTime="2025-11-21 14:29:10.401701756 +0000 UTC m=+3427.128116493" Nov 21 14:29:15 crc kubenswrapper[4675]: I1121 14:29:15.808030 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:15 crc kubenswrapper[4675]: I1121 14:29:15.808728 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:15 crc kubenswrapper[4675]: I1121 14:29:15.884970 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:16 crc kubenswrapper[4675]: I1121 14:29:16.517806 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:16 crc kubenswrapper[4675]: I1121 14:29:16.575512 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v97xn"] Nov 21 14:29:18 crc kubenswrapper[4675]: I1121 14:29:18.469334 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v97xn" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="registry-server" containerID="cri-o://cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312" gracePeriod=2 Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.015440 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.153308 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp7v8\" (UniqueName: \"kubernetes.io/projected/07a28c94-45bd-48a9-90e2-f0ed03405516-kube-api-access-hp7v8\") pod \"07a28c94-45bd-48a9-90e2-f0ed03405516\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.153402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-utilities\") pod \"07a28c94-45bd-48a9-90e2-f0ed03405516\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.153449 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-catalog-content\") pod \"07a28c94-45bd-48a9-90e2-f0ed03405516\" (UID: \"07a28c94-45bd-48a9-90e2-f0ed03405516\") " Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.155359 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-utilities" (OuterVolumeSpecName: "utilities") pod "07a28c94-45bd-48a9-90e2-f0ed03405516" (UID: "07a28c94-45bd-48a9-90e2-f0ed03405516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.158778 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a28c94-45bd-48a9-90e2-f0ed03405516-kube-api-access-hp7v8" (OuterVolumeSpecName: "kube-api-access-hp7v8") pod "07a28c94-45bd-48a9-90e2-f0ed03405516" (UID: "07a28c94-45bd-48a9-90e2-f0ed03405516"). InnerVolumeSpecName "kube-api-access-hp7v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.171771 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07a28c94-45bd-48a9-90e2-f0ed03405516" (UID: "07a28c94-45bd-48a9-90e2-f0ed03405516"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.255960 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp7v8\" (UniqueName: \"kubernetes.io/projected/07a28c94-45bd-48a9-90e2-f0ed03405516-kube-api-access-hp7v8\") on node \"crc\" DevicePath \"\"" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.256788 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.256897 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a28c94-45bd-48a9-90e2-f0ed03405516-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.497198 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v97xn" event={"ID":"07a28c94-45bd-48a9-90e2-f0ed03405516","Type":"ContainerDied","Data":"cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312"} Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.497266 4675 scope.go:117] "RemoveContainer" containerID="cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.497211 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v97xn" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.501600 4675 generic.go:334] "Generic (PLEG): container finished" podID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerID="cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312" exitCode=0 Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.501658 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v97xn" event={"ID":"07a28c94-45bd-48a9-90e2-f0ed03405516","Type":"ContainerDied","Data":"74ad220c0d0bd106880e7ab03b5055ddfa05fb3766cf36076955bcd3d7403b03"} Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.541194 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v97xn"] Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.546798 4675 scope.go:117] "RemoveContainer" containerID="012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.559439 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v97xn"] Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.568112 4675 scope.go:117] "RemoveContainer" containerID="95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.642311 4675 scope.go:117] "RemoveContainer" containerID="cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312" Nov 21 14:29:19 crc kubenswrapper[4675]: E1121 14:29:19.643019 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312\": container with ID starting with cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312 not found: ID does not exist" containerID="cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.643189 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312"} err="failed to get container status \"cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312\": rpc error: code = NotFound desc = could not find container \"cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312\": container with ID starting with cc54c5290a8a4ee4b433bbe67810b5ee068f730a8e273026eebd967336ac5312 not found: ID does not exist" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.643223 4675 scope.go:117] "RemoveContainer" containerID="012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7" Nov 21 14:29:19 crc kubenswrapper[4675]: E1121 14:29:19.643593 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7\": container with ID starting with 012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7 not found: ID does not exist" containerID="012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.643628 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7"} err="failed to get container status \"012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7\": rpc error: code = NotFound desc = could not find container \"012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7\": container with ID starting with 012e355d2f45aada672e4f338c5698985325c5f1e7b2deea673c66cc1ed1fbf7 not found: ID does not exist" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.643653 4675 scope.go:117] "RemoveContainer" containerID="95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d" Nov 21 14:29:19 crc kubenswrapper[4675]: E1121 14:29:19.644017 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d\": container with ID starting with 95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d not found: ID does not exist" containerID="95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d" Nov 21 14:29:19 crc kubenswrapper[4675]: I1121 14:29:19.644052 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d"} err="failed to get container status \"95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d\": rpc error: code = NotFound desc = could not find container \"95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d\": container with ID starting with 95b910fb057b3e85879e12930b97c42202ce851e51c554463a64cbd248a3517d not found: ID does not exist" Nov 21 14:29:20 crc kubenswrapper[4675]: I1121 14:29:20.862377 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" path="/var/lib/kubelet/pods/07a28c94-45bd-48a9-90e2-f0ed03405516/volumes" Nov 21 14:29:46 crc kubenswrapper[4675]: I1121 14:29:46.136376 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
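[Annotation: the RemoveContainer / NotFound exchange above is benign. The kubelet deletes the container, then re-queries its status, and CRI-O answers with gRPC code NotFound, which the kubelet merely logs before moving on. A client driving a CRI-style gRPC endpoint would classify that error with the standard status/codes packages; a minimal sketch, where alreadyGone is an illustrative helper, not kubelet code.]

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether err carries gRPC code NotFound, the code
// behind the "could not find container" responses in the entries above.
func alreadyGone(err error) bool {
	st, ok := status.FromError(err)
	return ok && st.Code() == codes.NotFound
}

func main() {
	// Simulate the runtime's answer after the container was already removed.
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(alreadyGone(err)) // true: treat the delete as idempotent
}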
Nov 21 14:29:46 crc kubenswrapper[4675]: I1121 14:29:46.136376 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:29:46 crc kubenswrapper[4675]: I1121 14:29:46.137111 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.179400 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"]
Nov 21 14:30:00 crc kubenswrapper[4675]: E1121 14:30:00.180766 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="registry-server"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.180785 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="registry-server"
Nov 21 14:30:00 crc kubenswrapper[4675]: E1121 14:30:00.180860 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="extract-utilities"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.180871 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="extract-utilities"
Nov 21 14:30:00 crc kubenswrapper[4675]: E1121 14:30:00.180900 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="extract-content"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.180910 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="extract-content"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.181282 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a28c94-45bd-48a9-90e2-f0ed03405516" containerName="registry-server"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.182566 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.185600 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.186813 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.187663 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"]
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.282741 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032d0414-27e3-499b-9259-a2ef3a97083d-config-volume\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.282864 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/032d0414-27e3-499b-9259-a2ef3a97083d-secret-volume\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.283143 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hj9x\" (UniqueName: \"kubernetes.io/projected/032d0414-27e3-499b-9259-a2ef3a97083d-kube-api-access-4hj9x\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.384903 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032d0414-27e3-499b-9259-a2ef3a97083d-config-volume\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.384983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/032d0414-27e3-499b-9259-a2ef3a97083d-secret-volume\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.385103 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hj9x\" (UniqueName: \"kubernetes.io/projected/032d0414-27e3-499b-9259-a2ef3a97083d-kube-api-access-4hj9x\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.385795 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032d0414-27e3-499b-9259-a2ef3a97083d-config-volume\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.391769 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/032d0414-27e3-499b-9259-a2ef3a97083d-secret-volume\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.401910 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hj9x\" (UniqueName: \"kubernetes.io/projected/032d0414-27e3-499b-9259-a2ef3a97083d-kube-api-access-4hj9x\") pod \"collect-profiles-29395590-prxr4\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.509247 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:00 crc kubenswrapper[4675]: I1121 14:30:00.963432 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"]
Nov 21 14:30:01 crc kubenswrapper[4675]: I1121 14:30:01.014770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4" event={"ID":"032d0414-27e3-499b-9259-a2ef3a97083d","Type":"ContainerStarted","Data":"27a4206d058b5e1897f2107ca2c70ea77909399ba272cbcf8a0a9e27b275fb2d"}
Nov 21 14:30:02 crc kubenswrapper[4675]: I1121 14:30:02.031305 4675 generic.go:334] "Generic (PLEG): container finished" podID="032d0414-27e3-499b-9259-a2ef3a97083d" containerID="7188a339f118555cea44ffb7783754d8713f93cc4dff705e6d6d4c8b0b841bdb" exitCode=0
Nov 21 14:30:02 crc kubenswrapper[4675]: I1121 14:30:02.031380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4" event={"ID":"032d0414-27e3-499b-9259-a2ef3a97083d","Type":"ContainerDied","Data":"7188a339f118555cea44ffb7783754d8713f93cc4dff705e6d6d4c8b0b841bdb"}
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.446871 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.596550 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032d0414-27e3-499b-9259-a2ef3a97083d-config-volume\") pod \"032d0414-27e3-499b-9259-a2ef3a97083d\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") "
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.596793 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/032d0414-27e3-499b-9259-a2ef3a97083d-secret-volume\") pod \"032d0414-27e3-499b-9259-a2ef3a97083d\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") "
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.596917 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hj9x\" (UniqueName: \"kubernetes.io/projected/032d0414-27e3-499b-9259-a2ef3a97083d-kube-api-access-4hj9x\") pod \"032d0414-27e3-499b-9259-a2ef3a97083d\" (UID: \"032d0414-27e3-499b-9259-a2ef3a97083d\") "
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.597492 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032d0414-27e3-499b-9259-a2ef3a97083d-config-volume" (OuterVolumeSpecName: "config-volume") pod "032d0414-27e3-499b-9259-a2ef3a97083d" (UID: "032d0414-27e3-499b-9259-a2ef3a97083d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.597902 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032d0414-27e3-499b-9259-a2ef3a97083d-config-volume\") on node \"crc\" DevicePath \"\""
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.603103 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032d0414-27e3-499b-9259-a2ef3a97083d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "032d0414-27e3-499b-9259-a2ef3a97083d" (UID: "032d0414-27e3-499b-9259-a2ef3a97083d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.603388 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032d0414-27e3-499b-9259-a2ef3a97083d-kube-api-access-4hj9x" (OuterVolumeSpecName: "kube-api-access-4hj9x") pod "032d0414-27e3-499b-9259-a2ef3a97083d" (UID: "032d0414-27e3-499b-9259-a2ef3a97083d"). InnerVolumeSpecName "kube-api-access-4hj9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.700750 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hj9x\" (UniqueName: \"kubernetes.io/projected/032d0414-27e3-499b-9259-a2ef3a97083d-kube-api-access-4hj9x\") on node \"crc\" DevicePath \"\""
Nov 21 14:30:03 crc kubenswrapper[4675]: I1121 14:30:03.700799 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/032d0414-27e3-499b-9259-a2ef3a97083d-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 21 14:30:04 crc kubenswrapper[4675]: I1121 14:30:04.060994 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4" event={"ID":"032d0414-27e3-499b-9259-a2ef3a97083d","Type":"ContainerDied","Data":"27a4206d058b5e1897f2107ca2c70ea77909399ba272cbcf8a0a9e27b275fb2d"}
Nov 21 14:30:04 crc kubenswrapper[4675]: I1121 14:30:04.061372 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a4206d058b5e1897f2107ca2c70ea77909399ba272cbcf8a0a9e27b275fb2d"
Nov 21 14:30:04 crc kubenswrapper[4675]: I1121 14:30:04.061057 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"
Nov 21 14:30:04 crc kubenswrapper[4675]: I1121 14:30:04.539404 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4"]
Nov 21 14:30:04 crc kubenswrapper[4675]: I1121 14:30:04.552366 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-xqzt4"]
Nov 21 14:30:04 crc kubenswrapper[4675]: I1121 14:30:04.866424 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67814e68-91a8-44c3-801d-77bfa9ffc9b0" path="/var/lib/kubelet/pods/67814e68-91a8-44c3-801d-77bfa9ffc9b0/volumes"
Nov 21 14:30:16 crc kubenswrapper[4675]: I1121 14:30:16.136771 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:30:16 crc kubenswrapper[4675]: I1121 14:30:16.138060 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:30:26 crc kubenswrapper[4675]: I1121 14:30:26.709172 4675 scope.go:117] "RemoveContainer" containerID="4d45fc3cc395872483f5b3858554947237c0f85e2268c0cb2c79da7473875041"
Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.136949 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.137654 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.137716 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.139094 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7be1ca3661b800d7e1be3df35e623bd7dc17c97289051a01313c20529119f4c"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.139154 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://b7be1ca3661b800d7e1be3df35e623bd7dc17c97289051a01313c20529119f4c" gracePeriod=600 Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.627911 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="b7be1ca3661b800d7e1be3df35e623bd7dc17c97289051a01313c20529119f4c" exitCode=0 Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.628035 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"b7be1ca3661b800d7e1be3df35e623bd7dc17c97289051a01313c20529119f4c"} Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.628460 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"} Nov 21 14:30:46 crc kubenswrapper[4675]: I1121 14:30:46.628496 4675 scope.go:117] "RemoveContainer" containerID="fa269d427a790bcd997d58471b6b360b53928db9e2e306e4c7e1cc9ce64a8399" Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.195570 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wstbg"] Nov 21 14:31:13 crc kubenswrapper[4675]: E1121 14:31:13.196906 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032d0414-27e3-499b-9259-a2ef3a97083d" containerName="collect-profiles" Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.196928 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="032d0414-27e3-499b-9259-a2ef3a97083d" containerName="collect-profiles" Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.197248 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="032d0414-27e3-499b-9259-a2ef3a97083d" containerName="collect-profiles" Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.199549 4675 util.go:30] "No sandbox for pod can be found. 
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.211420 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wstbg"]
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.352849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwzg\" (UniqueName: \"kubernetes.io/projected/eb967da4-c38f-471f-b454-06e40a7a40da-kube-api-access-wtwzg\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.353110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-catalog-content\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.353242 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-utilities\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.455529 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtwzg\" (UniqueName: \"kubernetes.io/projected/eb967da4-c38f-471f-b454-06e40a7a40da-kube-api-access-wtwzg\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.455596 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-catalog-content\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.455631 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-utilities\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.456166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-utilities\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.456200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-catalog-content\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.481985 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtwzg\" (UniqueName: \"kubernetes.io/projected/eb967da4-c38f-471f-b454-06e40a7a40da-kube-api-access-wtwzg\") pod \"community-operators-wstbg\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") " pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:13 crc kubenswrapper[4675]: I1121 14:31:13.527905 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:14 crc kubenswrapper[4675]: I1121 14:31:14.132421 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wstbg"]
Nov 21 14:31:15 crc kubenswrapper[4675]: I1121 14:31:15.063918 4675 generic.go:334] "Generic (PLEG): container finished" podID="eb967da4-c38f-471f-b454-06e40a7a40da" containerID="5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8" exitCode=0
Nov 21 14:31:15 crc kubenswrapper[4675]: I1121 14:31:15.064053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerDied","Data":"5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8"}
Nov 21 14:31:15 crc kubenswrapper[4675]: I1121 14:31:15.064273 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerStarted","Data":"90f482c0d1bf0462ed56ac542416d021a92821cfdce94c240e5c8374e0af66ea"}
Nov 21 14:31:15 crc kubenswrapper[4675]: I1121 14:31:15.068832 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:31:17 crc kubenswrapper[4675]: I1121 14:31:17.087996 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerStarted","Data":"e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9"}
Nov 21 14:31:19 crc kubenswrapper[4675]: I1121 14:31:19.124303 4675 generic.go:334] "Generic (PLEG): container finished" podID="eb967da4-c38f-471f-b454-06e40a7a40da" containerID="e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9" exitCode=0
Nov 21 14:31:19 crc kubenswrapper[4675]: I1121 14:31:19.124495 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerDied","Data":"e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9"}
Nov 21 14:31:20 crc kubenswrapper[4675]: I1121 14:31:20.138817 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerStarted","Data":"81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a"}
Nov 21 14:31:20 crc kubenswrapper[4675]: I1121 14:31:20.161886 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wstbg" podStartSLOduration=2.658779996 podStartE2EDuration="7.16186986s" podCreationTimestamp="2025-11-21 14:31:13 +0000 UTC" firstStartedPulling="2025-11-21 14:31:15.068552621 +0000 UTC m=+3551.794967348" lastFinishedPulling="2025-11-21 14:31:19.571642485 +0000 UTC m=+3556.298057212" observedRunningTime="2025-11-21 14:31:20.161204443 +0000 UTC m=+3556.887619170" watchObservedRunningTime="2025-11-21 14:31:20.16186986 +0000 UTC m=+3556.888284587"
Nov 21 14:31:23 crc kubenswrapper[4675]: I1121 14:31:23.528167 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:23 crc kubenswrapper[4675]: I1121 14:31:23.528765 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:23 crc kubenswrapper[4675]: I1121 14:31:23.578704 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:24 crc kubenswrapper[4675]: I1121 14:31:24.262593 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:24 crc kubenswrapper[4675]: I1121 14:31:24.311176 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wstbg"]
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.226908 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wstbg" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="registry-server" containerID="cri-o://81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a" gracePeriod=2
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.770136 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.904466 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-catalog-content\") pod \"eb967da4-c38f-471f-b454-06e40a7a40da\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") "
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.904638 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-utilities\") pod \"eb967da4-c38f-471f-b454-06e40a7a40da\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") "
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.904749 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtwzg\" (UniqueName: \"kubernetes.io/projected/eb967da4-c38f-471f-b454-06e40a7a40da-kube-api-access-wtwzg\") pod \"eb967da4-c38f-471f-b454-06e40a7a40da\" (UID: \"eb967da4-c38f-471f-b454-06e40a7a40da\") "
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.905667 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-utilities" (OuterVolumeSpecName: "utilities") pod "eb967da4-c38f-471f-b454-06e40a7a40da" (UID: "eb967da4-c38f-471f-b454-06e40a7a40da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.911602 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb967da4-c38f-471f-b454-06e40a7a40da-kube-api-access-wtwzg" (OuterVolumeSpecName: "kube-api-access-wtwzg") pod "eb967da4-c38f-471f-b454-06e40a7a40da" (UID: "eb967da4-c38f-471f-b454-06e40a7a40da"). InnerVolumeSpecName "kube-api-access-wtwzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:31:26 crc kubenswrapper[4675]: I1121 14:31:26.958651 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb967da4-c38f-471f-b454-06e40a7a40da" (UID: "eb967da4-c38f-471f-b454-06e40a7a40da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.009040 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.009105 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb967da4-c38f-471f-b454-06e40a7a40da-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.009120 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtwzg\" (UniqueName: \"kubernetes.io/projected/eb967da4-c38f-471f-b454-06e40a7a40da-kube-api-access-wtwzg\") on node \"crc\" DevicePath \"\""
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.243709 4675 generic.go:334] "Generic (PLEG): container finished" podID="eb967da4-c38f-471f-b454-06e40a7a40da" containerID="81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a" exitCode=0
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.243809 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wstbg"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.243817 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerDied","Data":"81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a"}
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.244297 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wstbg" event={"ID":"eb967da4-c38f-471f-b454-06e40a7a40da","Type":"ContainerDied","Data":"90f482c0d1bf0462ed56ac542416d021a92821cfdce94c240e5c8374e0af66ea"}
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.244337 4675 scope.go:117] "RemoveContainer" containerID="81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.271815 4675 scope.go:117] "RemoveContainer" containerID="e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.291993 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wstbg"]
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.305520 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wstbg"]
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.306715 4675 scope.go:117] "RemoveContainer" containerID="5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.366304 4675 scope.go:117] "RemoveContainer" containerID="81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a"
Nov 21 14:31:27 crc kubenswrapper[4675]: E1121 14:31:27.366809 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a\": container with ID starting with 81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a not found: ID does not exist" containerID="81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.366936 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a"} err="failed to get container status \"81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a\": rpc error: code = NotFound desc = could not find container \"81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a\": container with ID starting with 81f620e8b63e17646e945602e9e38aafd0d50f7e718fdd277f1a86ea98ec455a not found: ID does not exist"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.367059 4675 scope.go:117] "RemoveContainer" containerID="e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9"
Nov 21 14:31:27 crc kubenswrapper[4675]: E1121 14:31:27.367604 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9\": container with ID starting with e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9 not found: ID does not exist" containerID="e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.367657 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9"} err="failed to get container status \"e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9\": rpc error: code = NotFound desc = could not find container \"e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9\": container with ID starting with e106cfb4a1534a497e7bef699318310dc31d07427a95347cd1d08f9b32bc97e9 not found: ID does not exist"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.367700 4675 scope.go:117] "RemoveContainer" containerID="5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8"
Nov 21 14:31:27 crc kubenswrapper[4675]: E1121 14:31:27.368085 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8\": container with ID starting with 5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8 not found: ID does not exist" containerID="5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8"
Nov 21 14:31:27 crc kubenswrapper[4675]: I1121 14:31:27.368206 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8"} err="failed to get container status \"5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8\": rpc error: code = NotFound desc = could not find container \"5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8\": container with ID starting with 5a7d1bb864af32ff040ecaf4e602dca5ca19fae3b7141ac137ba1d0db985f5e8 not found: ID does not exist"
Nov 21 14:31:28 crc kubenswrapper[4675]: I1121 14:31:28.864434 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" path="/var/lib/kubelet/pods/eb967da4-c38f-471f-b454-06e40a7a40da/volumes"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.287429 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drwkk"]
Nov 21 14:31:52 crc kubenswrapper[4675]: E1121 14:31:52.288736 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="registry-server"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.288799 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="registry-server"
Nov 21 14:31:52 crc kubenswrapper[4675]: E1121 14:31:52.288867 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="extract-utilities"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.288876 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="extract-utilities"
Nov 21 14:31:52 crc kubenswrapper[4675]: E1121 14:31:52.288911 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="extract-content"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.288919 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="extract-content"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.289239 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb967da4-c38f-471f-b454-06e40a7a40da" containerName="registry-server"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.293550 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.300705 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drwkk"]
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.450333 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-utilities\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.450551 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzd8\" (UniqueName: \"kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.450811 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-catalog-content\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.552887 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzd8\" (UniqueName: \"kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.553097 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-catalog-content\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.553135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-utilities\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.553745 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-utilities\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.553795 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-catalog-content\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.575989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzd8\" (UniqueName: \"kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk"
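[Annotation: each of these catalog pods carries a generated kube-api-access-* projected volume, the service-account token mount that in-cluster clients read from /var/run/secrets/kubernetes.io/serviceaccount. client-go consumes exactly that mount through rest.InClusterConfig(); a minimal sketch, where listing openshift-marketplace pods is just an example call.]

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the projected token and CA that the kube-api-access-* volume
	// mounts under /var/run/secrets/kubernetes.io/serviceaccount.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("openshift-marketplace").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}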
"MountVolume.SetUp succeeded for volume \"kube-api-access-sqzd8\" (UniqueName: \"kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8\") pod \"certified-operators-drwkk\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:31:52 crc kubenswrapper[4675]: I1121 14:31:52.619802 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:31:53 crc kubenswrapper[4675]: I1121 14:31:53.217437 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drwkk"] Nov 21 14:31:53 crc kubenswrapper[4675]: I1121 14:31:53.568024 4675 generic.go:334] "Generic (PLEG): container finished" podID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerID="908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0" exitCode=0 Nov 21 14:31:53 crc kubenswrapper[4675]: I1121 14:31:53.568107 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerDied","Data":"908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0"} Nov 21 14:31:53 crc kubenswrapper[4675]: I1121 14:31:53.568140 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerStarted","Data":"f1e64c4532381a49dcb0e1dddd0cf818271648490b0d3b6b863426c230b7fc00"} Nov 21 14:31:54 crc kubenswrapper[4675]: I1121 14:31:54.587005 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerStarted","Data":"1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c"} Nov 21 14:31:56 crc kubenswrapper[4675]: I1121 14:31:56.609105 4675 generic.go:334] "Generic (PLEG): container finished" podID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerID="1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c" exitCode=0 Nov 21 14:31:56 crc kubenswrapper[4675]: I1121 14:31:56.609255 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerDied","Data":"1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c"} Nov 21 14:31:57 crc kubenswrapper[4675]: I1121 14:31:57.631040 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerStarted","Data":"6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b"} Nov 21 14:31:57 crc kubenswrapper[4675]: I1121 14:31:57.654917 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drwkk" podStartSLOduration=2.19477531 podStartE2EDuration="5.654901175s" podCreationTimestamp="2025-11-21 14:31:52 +0000 UTC" firstStartedPulling="2025-11-21 14:31:53.57021468 +0000 UTC m=+3590.296629407" lastFinishedPulling="2025-11-21 14:31:57.030340535 +0000 UTC m=+3593.756755272" observedRunningTime="2025-11-21 14:31:57.649778596 +0000 UTC m=+3594.376193323" watchObservedRunningTime="2025-11-21 14:31:57.654901175 +0000 UTC m=+3594.381315902" Nov 21 14:32:02 crc kubenswrapper[4675]: I1121 14:32:02.621024 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:32:02 crc kubenswrapper[4675]: I1121 14:32:02.622211 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:32:02 crc kubenswrapper[4675]: I1121 14:32:02.734844 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:32:03 crc kubenswrapper[4675]: I1121 14:32:03.768907 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:32:03 crc kubenswrapper[4675]: I1121 14:32:03.842178 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drwkk"] Nov 21 14:32:05 crc kubenswrapper[4675]: I1121 14:32:05.740424 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drwkk" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="registry-server" containerID="cri-o://6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b" gracePeriod=2 Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.286289 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.393366 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-catalog-content\") pod \"40a4965a-6418-47e7-8142-3dde1f2a41e7\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.393582 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-utilities\") pod \"40a4965a-6418-47e7-8142-3dde1f2a41e7\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.393672 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqzd8\" (UniqueName: \"kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8\") pod \"40a4965a-6418-47e7-8142-3dde1f2a41e7\" (UID: \"40a4965a-6418-47e7-8142-3dde1f2a41e7\") " Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.394453 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-utilities" (OuterVolumeSpecName: "utilities") pod "40a4965a-6418-47e7-8142-3dde1f2a41e7" (UID: "40a4965a-6418-47e7-8142-3dde1f2a41e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.400140 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8" (OuterVolumeSpecName: "kube-api-access-sqzd8") pod "40a4965a-6418-47e7-8142-3dde1f2a41e7" (UID: "40a4965a-6418-47e7-8142-3dde1f2a41e7"). InnerVolumeSpecName "kube-api-access-sqzd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.441925 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40a4965a-6418-47e7-8142-3dde1f2a41e7" (UID: "40a4965a-6418-47e7-8142-3dde1f2a41e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.496919 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.497445 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqzd8\" (UniqueName: \"kubernetes.io/projected/40a4965a-6418-47e7-8142-3dde1f2a41e7-kube-api-access-sqzd8\") on node \"crc\" DevicePath \"\"" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.497537 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40a4965a-6418-47e7-8142-3dde1f2a41e7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.752761 4675 generic.go:334] "Generic (PLEG): container finished" podID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerID="6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b" exitCode=0 Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.752830 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwkk" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.752837 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerDied","Data":"6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b"} Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.754398 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwkk" event={"ID":"40a4965a-6418-47e7-8142-3dde1f2a41e7","Type":"ContainerDied","Data":"f1e64c4532381a49dcb0e1dddd0cf818271648490b0d3b6b863426c230b7fc00"} Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.754438 4675 scope.go:117] "RemoveContainer" containerID="6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.803308 4675 scope.go:117] "RemoveContainer" containerID="1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.839130 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drwkk"] Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.870822 4675 scope.go:117] "RemoveContainer" containerID="908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.874041 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drwkk"] Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.927605 4675 scope.go:117] "RemoveContainer" containerID="6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b" Nov 21 14:32:06 crc kubenswrapper[4675]: E1121 14:32:06.929057 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b\": container with ID starting with 6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b not found: ID does not exist" containerID="6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.929277 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b"} err="failed to get container status \"6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b\": rpc error: code = NotFound desc = could not find container \"6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b\": container with ID starting with 6e326c935c1b5ce80dc27a4587efe37df7369fb259ed2c7b2266ccc83efe5c9b not found: ID does not exist" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.929317 4675 scope.go:117] "RemoveContainer" containerID="1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c" Nov 21 14:32:06 crc kubenswrapper[4675]: E1121 14:32:06.930180 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c\": container with ID starting with 1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c not found: ID does not exist" containerID="1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.930212 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c"} err="failed to get container status \"1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c\": rpc error: code = NotFound desc = could not find container \"1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c\": container with ID starting with 1b6b037b3ceb0afc9756a927918ca7629f92f65f0efeea5c3ac8c3db7bc1e87c not found: ID does not exist" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.930234 4675 scope.go:117] "RemoveContainer" containerID="908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0" Nov 21 14:32:06 crc kubenswrapper[4675]: E1121 14:32:06.930587 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0\": container with ID starting with 908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0 not found: ID does not exist" containerID="908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0" Nov 21 14:32:06 crc kubenswrapper[4675]: I1121 14:32:06.930616 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0"} err="failed to get container status \"908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0\": rpc error: code = NotFound desc = could not find container \"908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0\": container with ID starting with 908058163aca56955025646a3e3aae8abc22a311f6a3053015ecfdf03b99deb0 not found: ID does not exist" Nov 21 14:32:08 crc kubenswrapper[4675]: I1121 14:32:08.864618 4675 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" path="/var/lib/kubelet/pods/40a4965a-6418-47e7-8142-3dde1f2a41e7/volumes" Nov 21 14:32:38 crc kubenswrapper[4675]: E1121 14:32:38.126352 4675 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.162:56298->38.102.83.162:37537: read tcp 38.102.83.162:56298->38.102.83.162:37537: read: connection reset by peer Nov 21 14:32:46 crc kubenswrapper[4675]: I1121 14:32:46.136738 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:32:46 crc kubenswrapper[4675]: I1121 14:32:46.137433 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.037813 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-287bk"] Nov 21 14:32:53 crc kubenswrapper[4675]: E1121 14:32:53.039089 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="registry-server" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.039107 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="registry-server" Nov 21 14:32:53 crc kubenswrapper[4675]: E1121 14:32:53.039122 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="extract-utilities" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.039130 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="extract-utilities" Nov 21 14:32:53 crc kubenswrapper[4675]: E1121 14:32:53.039197 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="extract-content" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.039206 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="extract-content" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.039466 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a4965a-6418-47e7-8142-3dde1f2a41e7" containerName="registry-server" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.042145 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.051563 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-287bk"] Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.130175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-utilities\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.130248 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-catalog-content\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.130274 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxk6\" (UniqueName: \"kubernetes.io/projected/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-kube-api-access-sgxk6\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.232681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-catalog-content\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.232739 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxk6\" (UniqueName: \"kubernetes.io/projected/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-kube-api-access-sgxk6\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.232962 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-utilities\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.233211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-catalog-content\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.233416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-utilities\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.260625 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sgxk6\" (UniqueName: \"kubernetes.io/projected/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-kube-api-access-sgxk6\") pod \"redhat-operators-287bk\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") " pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.367236 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-287bk" Nov 21 14:32:53 crc kubenswrapper[4675]: I1121 14:32:53.873720 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-287bk"] Nov 21 14:32:54 crc kubenswrapper[4675]: I1121 14:32:54.312234 4675 generic.go:334] "Generic (PLEG): container finished" podID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerID="724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825" exitCode=0 Nov 21 14:32:54 crc kubenswrapper[4675]: I1121 14:32:54.312324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerDied","Data":"724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825"} Nov 21 14:32:54 crc kubenswrapper[4675]: I1121 14:32:54.312635 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerStarted","Data":"cb1996b6e59412b63ff57e376117ef52359c52e869797d94c7b45ec45682bc92"} Nov 21 14:32:55 crc kubenswrapper[4675]: I1121 14:32:55.324850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerStarted","Data":"f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985"} Nov 21 14:33:02 crc kubenswrapper[4675]: I1121 14:33:02.412660 4675 generic.go:334] "Generic (PLEG): container finished" podID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerID="f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985" exitCode=0 Nov 21 14:33:02 crc kubenswrapper[4675]: I1121 14:33:02.412740 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerDied","Data":"f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985"} Nov 21 14:33:03 crc kubenswrapper[4675]: I1121 14:33:03.424002 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerStarted","Data":"a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b"} Nov 21 14:33:03 crc kubenswrapper[4675]: I1121 14:33:03.452611 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-287bk" podStartSLOduration=1.934474646 podStartE2EDuration="10.452589798s" podCreationTimestamp="2025-11-21 14:32:53 +0000 UTC" firstStartedPulling="2025-11-21 14:32:54.314260765 +0000 UTC m=+3651.040675492" lastFinishedPulling="2025-11-21 14:33:02.832375917 +0000 UTC m=+3659.558790644" observedRunningTime="2025-11-21 14:33:03.441881697 +0000 UTC m=+3660.168296424" watchObservedRunningTime="2025-11-21 14:33:03.452589798 +0000 UTC m=+3660.179004525" Nov 21 14:33:13 crc kubenswrapper[4675]: I1121 14:33:13.367842 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-287bk" 
Nov 21 14:33:13 crc kubenswrapper[4675]: I1121 14:33:13.368491 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-287bk"
Nov 21 14:33:14 crc kubenswrapper[4675]: I1121 14:33:14.427828 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-287bk" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:33:14 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:33:14 crc kubenswrapper[4675]: >
Nov 21 14:33:16 crc kubenswrapper[4675]: I1121 14:33:16.136210 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:33:16 crc kubenswrapper[4675]: I1121 14:33:16.136593 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:33:23 crc kubenswrapper[4675]: I1121 14:33:23.422533 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-287bk"
Nov 21 14:33:23 crc kubenswrapper[4675]: I1121 14:33:23.495628 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-287bk"
Nov 21 14:33:24 crc kubenswrapper[4675]: I1121 14:33:24.250198 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-287bk"]
Nov 21 14:33:24 crc kubenswrapper[4675]: I1121 14:33:24.689819 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-287bk" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="registry-server" containerID="cri-o://a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b" gracePeriod=2
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.241957 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-287bk"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.323242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-catalog-content\") pod \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") "
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.338319 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgxk6\" (UniqueName: \"kubernetes.io/projected/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-kube-api-access-sgxk6\") pod \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") "
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.338444 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-utilities\") pod \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\" (UID: \"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7\") "
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.339146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-utilities" (OuterVolumeSpecName: "utilities") pod "f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" (UID: "f383465b-2fcf-4e1c-b36a-f8bb2784c2f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.340091 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.343929 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-kube-api-access-sgxk6" (OuterVolumeSpecName: "kube-api-access-sgxk6") pod "f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" (UID: "f383465b-2fcf-4e1c-b36a-f8bb2784c2f7"). InnerVolumeSpecName "kube-api-access-sgxk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.415635 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" (UID: "f383465b-2fcf-4e1c-b36a-f8bb2784c2f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.442237 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgxk6\" (UniqueName: \"kubernetes.io/projected/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-kube-api-access-sgxk6\") on node \"crc\" DevicePath \"\""
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.442267 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.713857 4675 generic.go:334] "Generic (PLEG): container finished" podID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerID="a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b" exitCode=0
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.713899 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerDied","Data":"a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b"}
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.713928 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-287bk" event={"ID":"f383465b-2fcf-4e1c-b36a-f8bb2784c2f7","Type":"ContainerDied","Data":"cb1996b6e59412b63ff57e376117ef52359c52e869797d94c7b45ec45682bc92"}
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.713948 4675 scope.go:117] "RemoveContainer" containerID="a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.714112 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-287bk"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.761978 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-287bk"]
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.764385 4675 scope.go:117] "RemoveContainer" containerID="f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.771674 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-287bk"]
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.793498 4675 scope.go:117] "RemoveContainer" containerID="724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.858329 4675 scope.go:117] "RemoveContainer" containerID="a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b"
Nov 21 14:33:25 crc kubenswrapper[4675]: E1121 14:33:25.859792 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b\": container with ID starting with a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b not found: ID does not exist" containerID="a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.859857 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b"} err="failed to get container status \"a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b\": rpc error: code = NotFound desc = could not find container \"a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b\": container with ID starting with a97a7c40888d58c571cf60142752c63884201c51720b51adc4414cf52feb7c7b not found: ID does not exist"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.859890 4675 scope.go:117] "RemoveContainer" containerID="f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985"
Nov 21 14:33:25 crc kubenswrapper[4675]: E1121 14:33:25.860431 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985\": container with ID starting with f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985 not found: ID does not exist" containerID="f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.860461 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985"} err="failed to get container status \"f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985\": rpc error: code = NotFound desc = could not find container \"f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985\": container with ID starting with f26068ebd714272cef047b71bc8c1d166757b35c89ef5ac9379887f24db21985 not found: ID does not exist"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.860484 4675 scope.go:117] "RemoveContainer" containerID="724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825"
Nov 21 14:33:25 crc kubenswrapper[4675]: E1121 14:33:25.861150 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825\": container with ID starting with 724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825 not found: ID does not exist" containerID="724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825"
Nov 21 14:33:25 crc kubenswrapper[4675]: I1121 14:33:25.861187 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825"} err="failed to get container status \"724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825\": rpc error: code = NotFound desc = could not find container \"724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825\": container with ID starting with 724e859c4855e6a162b4544c06bdc88660530f1281ad77eee924cd3f8a503825 not found: ID does not exist"
Nov 21 14:33:26 crc kubenswrapper[4675]: I1121 14:33:26.875396 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" path="/var/lib/kubelet/pods/f383465b-2fcf-4e1c-b36a-f8bb2784c2f7/volumes"
Nov 21 14:33:46 crc kubenswrapper[4675]: I1121 14:33:46.136786 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:33:46 crc kubenswrapper[4675]: I1121 14:33:46.137704 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:33:46 crc kubenswrapper[4675]: I1121 14:33:46.137788 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx"
Nov 21 14:33:46 crc kubenswrapper[4675]: I1121 14:33:46.139106 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 14:33:46 crc kubenswrapper[4675]: I1121 14:33:46.139214 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c" gracePeriod=600
Nov 21 14:33:46 crc kubenswrapper[4675]: E1121 14:33:46.269370 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:33:47 crc kubenswrapper[4675]: I1121 14:33:47.017668 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c" exitCode=0
Nov 21 14:33:47 crc kubenswrapper[4675]: I1121 14:33:47.017802 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"}
Nov 21 14:33:47 crc kubenswrapper[4675]: I1121 14:33:47.018236 4675 scope.go:117] "RemoveContainer" containerID="b7be1ca3661b800d7e1be3df35e623bd7dc17c97289051a01313c20529119f4c"
Nov 21 14:33:47 crc kubenswrapper[4675]: I1121 14:33:47.019437 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:33:47 crc kubenswrapper[4675]: E1121 14:33:47.020158 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:33:54 crc kubenswrapper[4675]: E1121 14:33:54.642730 4675 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:43128->38.102.83.162:37537: write tcp 38.102.83.162:43128->38.102.83.162:37537: write: broken pipe
Nov 21 14:34:00 crc kubenswrapper[4675]: I1121 14:34:00.849867 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:34:00 crc kubenswrapper[4675]: E1121 14:34:00.850959 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:34:11 crc kubenswrapper[4675]: I1121 14:34:11.848876 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:34:11 crc kubenswrapper[4675]: E1121 14:34:11.849703 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:34:24 crc kubenswrapper[4675]: I1121 14:34:24.857247 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:34:24 crc kubenswrapper[4675]: E1121 14:34:24.858128 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:34:38 crc kubenswrapper[4675]: I1121 14:34:38.849982 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:34:38 crc kubenswrapper[4675]: E1121 14:34:38.850831 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:34:52 crc kubenswrapper[4675]: I1121 14:34:52.848856 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:34:52 crc kubenswrapper[4675]: E1121 14:34:52.849673 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:35:03 crc kubenswrapper[4675]: I1121 14:35:03.850041 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:35:03 crc kubenswrapper[4675]: E1121 14:35:03.851133 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:35:16 crc kubenswrapper[4675]: I1121 14:35:16.850322 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:35:16 crc kubenswrapper[4675]: E1121 14:35:16.851166 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:35:31 crc kubenswrapper[4675]: I1121 14:35:31.850026 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:35:31 crc kubenswrapper[4675]: E1121 14:35:31.851532 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:35:42 crc kubenswrapper[4675]: I1121 14:35:42.849407 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:35:42 crc kubenswrapper[4675]: E1121 14:35:42.850480 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:35:56 crc kubenswrapper[4675]: I1121 14:35:56.850107 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:35:56 crc kubenswrapper[4675]: E1121 14:35:56.851519 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:36:08 crc kubenswrapper[4675]: I1121 14:36:08.849243 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:36:08 crc kubenswrapper[4675]: E1121 14:36:08.850093 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:36:21 crc kubenswrapper[4675]: I1121 14:36:21.849287 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:36:21 crc kubenswrapper[4675]: E1121 14:36:21.850251 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:36:32 crc kubenswrapper[4675]: I1121 14:36:32.849339 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:36:32 crc kubenswrapper[4675]: E1121 14:36:32.850212 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:36:44 crc kubenswrapper[4675]: I1121 14:36:44.855629 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:36:44 crc kubenswrapper[4675]: E1121 14:36:44.856294 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:36:58 crc kubenswrapper[4675]: I1121 14:36:58.849524 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:36:58 crc kubenswrapper[4675]: E1121 14:36:58.850288 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:37:10 crc kubenswrapper[4675]: I1121 14:37:10.849779 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:37:10 crc kubenswrapper[4675]: E1121 14:37:10.850626 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:37:21 crc kubenswrapper[4675]: E1121 14:37:21.897187 4675 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:57186->38.102.83.162:37537: write tcp 38.102.83.162:57186->38.102.83.162:37537: write: broken pipe
Nov 21 14:37:23 crc kubenswrapper[4675]: I1121 14:37:23.849686 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:37:23 crc kubenswrapper[4675]: E1121 14:37:23.850415 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:37:37 crc kubenswrapper[4675]: I1121 14:37:37.850061 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:37:37 crc kubenswrapper[4675]: E1121 14:37:37.850944 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:37:52 crc kubenswrapper[4675]: I1121 14:37:52.850126 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:37:52 crc kubenswrapper[4675]: E1121 14:37:52.851038 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:38:06 crc kubenswrapper[4675]: I1121 14:38:06.850477 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:38:06 crc kubenswrapper[4675]: E1121 14:38:06.851621 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:38:18 crc kubenswrapper[4675]: I1121 14:38:18.849770 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:38:18 crc kubenswrapper[4675]: E1121 14:38:18.850990 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:38:30 crc kubenswrapper[4675]: I1121 14:38:30.850826 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:38:30 crc kubenswrapper[4675]: E1121 14:38:30.851888 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:38:41 crc kubenswrapper[4675]: I1121 14:38:41.849239 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:38:41 crc kubenswrapper[4675]: E1121 14:38:41.850046 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:38:55 crc kubenswrapper[4675]: I1121 14:38:55.849758 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:38:56 crc kubenswrapper[4675]: I1121 14:38:56.985910 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"269816fb93c202a0c84f4606751e8acce9638cd3d82728a5f2d8489920decfbb"}
Nov 21 14:41:16 crc kubenswrapper[4675]: I1121 14:41:16.136235 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:41:16 crc kubenswrapper[4675]: I1121 14:41:16.137115 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:41:46 crc kubenswrapper[4675]: I1121 14:41:46.136603 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:41:46 crc kubenswrapper[4675]: I1121 14:41:46.137343 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.242978 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wnk97"]
Nov 21 14:42:10 crc kubenswrapper[4675]: E1121 14:42:10.244575 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="extract-utilities"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.244601 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="extract-utilities"
Nov 21 14:42:10 crc kubenswrapper[4675]: E1121 14:42:10.244629 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="registry-server"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.244641 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="registry-server"
Nov 21 14:42:10 crc kubenswrapper[4675]: E1121 14:42:10.244675 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="extract-content"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.244687 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="extract-content"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.245167 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f383465b-2fcf-4e1c-b36a-f8bb2784c2f7" containerName="registry-server"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.248257 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.265127 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnk97"]
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.348475 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55dl9\" (UniqueName: \"kubernetes.io/projected/1a60cf93-6701-402c-a116-5de1a66139b8-kube-api-access-55dl9\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.348808 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-catalog-content\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.349342 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-utilities\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.451232 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-utilities\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.451342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55dl9\" (UniqueName: \"kubernetes.io/projected/1a60cf93-6701-402c-a116-5de1a66139b8-kube-api-access-55dl9\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.451374 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-catalog-content\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.451925 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-utilities\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.452138 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-catalog-content\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.472234 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55dl9\" (UniqueName: \"kubernetes.io/projected/1a60cf93-6701-402c-a116-5de1a66139b8-kube-api-access-55dl9\") pod \"certified-operators-wnk97\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:10 crc kubenswrapper[4675]: I1121 14:42:10.582887 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:11 crc kubenswrapper[4675]: I1121 14:42:11.093360 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnk97"]
Nov 21 14:42:11 crc kubenswrapper[4675]: I1121 14:42:11.589361 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a60cf93-6701-402c-a116-5de1a66139b8" containerID="d2c46e6d5aeeab90e4d402bf0e248104a3eb535024f55a787f3de28e711e9a98" exitCode=0
Nov 21 14:42:11 crc kubenswrapper[4675]: I1121 14:42:11.589457 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerDied","Data":"d2c46e6d5aeeab90e4d402bf0e248104a3eb535024f55a787f3de28e711e9a98"}
Nov 21 14:42:11 crc kubenswrapper[4675]: I1121 14:42:11.589637 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerStarted","Data":"33099bb56ff6118d7232f51f0d66e0040f23bfbaf46c1ca71a4b1727ca1fef65"}
Nov 21 14:42:11 crc kubenswrapper[4675]: I1121 14:42:11.591827 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:42:12 crc kubenswrapper[4675]: I1121 14:42:12.605011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerStarted","Data":"5f44748ac06e871102c65b99c3b4968f97981770bd413f57c2cc9e3d13fd86c3"}
Nov 21 14:42:15 crc kubenswrapper[4675]: I1121 14:42:15.645052 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a60cf93-6701-402c-a116-5de1a66139b8" containerID="5f44748ac06e871102c65b99c3b4968f97981770bd413f57c2cc9e3d13fd86c3" exitCode=0
Nov 21 14:42:15 crc kubenswrapper[4675]: I1121 14:42:15.645132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerDied","Data":"5f44748ac06e871102c65b99c3b4968f97981770bd413f57c2cc9e3d13fd86c3"}
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.136560 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.137435 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.137706 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx"
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.139458 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"269816fb93c202a0c84f4606751e8acce9638cd3d82728a5f2d8489920decfbb"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.139762 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://269816fb93c202a0c84f4606751e8acce9638cd3d82728a5f2d8489920decfbb" gracePeriod=600
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.663175 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="269816fb93c202a0c84f4606751e8acce9638cd3d82728a5f2d8489920decfbb" exitCode=0
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.663337 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"269816fb93c202a0c84f4606751e8acce9638cd3d82728a5f2d8489920decfbb"}
Nov 21 14:42:16 crc kubenswrapper[4675]: I1121 14:42:16.663523 4675 scope.go:117] "RemoveContainer" containerID="2898e833204a34ff3f92bad559a9cb5cf5ecbe1d3d55e4749fa62481ae4f782c"
Nov 21 14:42:17 crc kubenswrapper[4675]: I1121 14:42:17.681550 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerStarted","Data":"f288de031dbe07a4da162d06d716e2395bf88b980c83222b41ccbd2d1f085d8b"}
Nov 21 14:42:17 crc kubenswrapper[4675]: I1121 14:42:17.687451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"}
Nov 21 14:42:17 crc kubenswrapper[4675]: I1121 14:42:17.715761 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wnk97" podStartSLOduration=3.245565726 podStartE2EDuration="7.715739299s" podCreationTimestamp="2025-11-21 14:42:10 +0000 UTC" firstStartedPulling="2025-11-21 14:42:11.591570817 +0000 UTC m=+4208.317985544" lastFinishedPulling="2025-11-21 14:42:16.06174439 +0000 UTC m=+4212.788159117" observedRunningTime="2025-11-21 14:42:17.70825147 +0000 UTC m=+4214.434666227" watchObservedRunningTime="2025-11-21 14:42:17.715739299 +0000 UTC m=+4214.442154026"
Nov 21 14:42:20 crc kubenswrapper[4675]: I1121 14:42:20.583523 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:20 crc kubenswrapper[4675]: I1121 14:42:20.585071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wnk97"
Nov 21 14:42:21 crc kubenswrapper[4675]: I1121 14:42:21.656468 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wnk97" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:42:21 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:42:21 crc kubenswrapper[4675]: >
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.047305 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2twk"]
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.054563 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.061299 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2twk"]
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.127192 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-utilities\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.127361 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-catalog-content\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.127407 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xxc\" (UniqueName: \"kubernetes.io/projected/23ed746c-e876-4d38-a91e-6edf38b21347-kube-api-access-t2xxc\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.229403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-catalog-content\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.229475 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xxc\" (UniqueName: \"kubernetes.io/projected/23ed746c-e876-4d38-a91e-6edf38b21347-kube-api-access-t2xxc\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.229591 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-utilities\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk"
Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.229929 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-catalog-content\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\")
" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.230173 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-utilities\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.251134 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xxc\" (UniqueName: \"kubernetes.io/projected/23ed746c-e876-4d38-a91e-6edf38b21347-kube-api-access-t2xxc\") pod \"community-operators-w2twk\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:24 crc kubenswrapper[4675]: I1121 14:42:24.410764 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:25 crc kubenswrapper[4675]: I1121 14:42:25.004947 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2twk"] Nov 21 14:42:25 crc kubenswrapper[4675]: E1121 14:42:25.663275 4675 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:51432->38.102.83.162:37537: write tcp 38.102.83.162:51432->38.102.83.162:37537: write: connection reset by peer Nov 21 14:42:25 crc kubenswrapper[4675]: I1121 14:42:25.827283 4675 generic.go:334] "Generic (PLEG): container finished" podID="23ed746c-e876-4d38-a91e-6edf38b21347" containerID="a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c" exitCode=0 Nov 21 14:42:25 crc kubenswrapper[4675]: I1121 14:42:25.827328 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerDied","Data":"a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c"} Nov 21 14:42:25 crc kubenswrapper[4675]: I1121 14:42:25.827355 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerStarted","Data":"1c9088dc4b46dcb37326adea4336c09c714ac91e8b4f014b249cb99158e3806b"} Nov 21 14:42:26 crc kubenswrapper[4675]: I1121 14:42:26.846343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerStarted","Data":"cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f"} Nov 21 14:42:28 crc kubenswrapper[4675]: I1121 14:42:28.904323 4675 generic.go:334] "Generic (PLEG): container finished" podID="23ed746c-e876-4d38-a91e-6edf38b21347" containerID="cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f" exitCode=0 Nov 21 14:42:28 crc kubenswrapper[4675]: I1121 14:42:28.906725 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerDied","Data":"cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f"} Nov 21 14:42:30 crc kubenswrapper[4675]: I1121 14:42:30.889277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wnk97" Nov 21 14:42:30 crc kubenswrapper[4675]: I1121 14:42:30.938703 4675 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerStarted","Data":"cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a"} Nov 21 14:42:30 crc kubenswrapper[4675]: I1121 14:42:30.948193 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wnk97" Nov 21 14:42:30 crc kubenswrapper[4675]: I1121 14:42:30.962506 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2twk" podStartSLOduration=2.217943874 podStartE2EDuration="6.962488164s" podCreationTimestamp="2025-11-21 14:42:24 +0000 UTC" firstStartedPulling="2025-11-21 14:42:25.829352672 +0000 UTC m=+4222.555767409" lastFinishedPulling="2025-11-21 14:42:30.573896942 +0000 UTC m=+4227.300311699" observedRunningTime="2025-11-21 14:42:30.959024866 +0000 UTC m=+4227.685439593" watchObservedRunningTime="2025-11-21 14:42:30.962488164 +0000 UTC m=+4227.688902891" Nov 21 14:42:32 crc kubenswrapper[4675]: I1121 14:42:32.428283 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnk97"] Nov 21 14:42:32 crc kubenswrapper[4675]: I1121 14:42:32.428913 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wnk97" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="registry-server" containerID="cri-o://f288de031dbe07a4da162d06d716e2395bf88b980c83222b41ccbd2d1f085d8b" gracePeriod=2 Nov 21 14:42:32 crc kubenswrapper[4675]: I1121 14:42:32.967983 4675 generic.go:334] "Generic (PLEG): container finished" podID="1a60cf93-6701-402c-a116-5de1a66139b8" containerID="f288de031dbe07a4da162d06d716e2395bf88b980c83222b41ccbd2d1f085d8b" exitCode=0 Nov 21 14:42:32 crc kubenswrapper[4675]: I1121 14:42:32.968122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerDied","Data":"f288de031dbe07a4da162d06d716e2395bf88b980c83222b41ccbd2d1f085d8b"} Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.782601 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wnk97" Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.920317 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-catalog-content\") pod \"1a60cf93-6701-402c-a116-5de1a66139b8\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.920385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-utilities\") pod \"1a60cf93-6701-402c-a116-5de1a66139b8\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.920540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55dl9\" (UniqueName: \"kubernetes.io/projected/1a60cf93-6701-402c-a116-5de1a66139b8-kube-api-access-55dl9\") pod \"1a60cf93-6701-402c-a116-5de1a66139b8\" (UID: \"1a60cf93-6701-402c-a116-5de1a66139b8\") " Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.921692 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-utilities" (OuterVolumeSpecName: "utilities") pod "1a60cf93-6701-402c-a116-5de1a66139b8" (UID: "1a60cf93-6701-402c-a116-5de1a66139b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.930161 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a60cf93-6701-402c-a116-5de1a66139b8-kube-api-access-55dl9" (OuterVolumeSpecName: "kube-api-access-55dl9") pod "1a60cf93-6701-402c-a116-5de1a66139b8" (UID: "1a60cf93-6701-402c-a116-5de1a66139b8"). InnerVolumeSpecName "kube-api-access-55dl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.985334 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnk97" event={"ID":"1a60cf93-6701-402c-a116-5de1a66139b8","Type":"ContainerDied","Data":"33099bb56ff6118d7232f51f0d66e0040f23bfbaf46c1ca71a4b1727ca1fef65"} Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.985390 4675 scope.go:117] "RemoveContainer" containerID="f288de031dbe07a4da162d06d716e2395bf88b980c83222b41ccbd2d1f085d8b" Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.985558 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnk97" Nov 21 14:42:33 crc kubenswrapper[4675]: I1121 14:42:33.993233 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a60cf93-6701-402c-a116-5de1a66139b8" (UID: "1a60cf93-6701-402c-a116-5de1a66139b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.024011 4675 scope.go:117] "RemoveContainer" containerID="5f44748ac06e871102c65b99c3b4968f97981770bd413f57c2cc9e3d13fd86c3" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.026792 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.026813 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a60cf93-6701-402c-a116-5de1a66139b8-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.026823 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55dl9\" (UniqueName: \"kubernetes.io/projected/1a60cf93-6701-402c-a116-5de1a66139b8-kube-api-access-55dl9\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.052609 4675 scope.go:117] "RemoveContainer" containerID="d2c46e6d5aeeab90e4d402bf0e248104a3eb535024f55a787f3de28e711e9a98" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.355119 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnk97"] Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.369437 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wnk97"] Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.412383 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.412671 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.486717 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:34 crc kubenswrapper[4675]: I1121 14:42:34.864244 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" path="/var/lib/kubelet/pods/1a60cf93-6701-402c-a116-5de1a66139b8/volumes" Nov 21 14:42:36 crc kubenswrapper[4675]: I1121 14:42:36.112136 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:36 crc kubenswrapper[4675]: I1121 14:42:36.818819 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2twk"] Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.049827 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2twk" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="registry-server" containerID="cri-o://cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a" gracePeriod=2 Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.557268 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.656911 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-utilities\") pod \"23ed746c-e876-4d38-a91e-6edf38b21347\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.657172 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-catalog-content\") pod \"23ed746c-e876-4d38-a91e-6edf38b21347\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.657252 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xxc\" (UniqueName: \"kubernetes.io/projected/23ed746c-e876-4d38-a91e-6edf38b21347-kube-api-access-t2xxc\") pod \"23ed746c-e876-4d38-a91e-6edf38b21347\" (UID: \"23ed746c-e876-4d38-a91e-6edf38b21347\") " Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.658000 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-utilities" (OuterVolumeSpecName: "utilities") pod "23ed746c-e876-4d38-a91e-6edf38b21347" (UID: "23ed746c-e876-4d38-a91e-6edf38b21347"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.658798 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.667275 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ed746c-e876-4d38-a91e-6edf38b21347-kube-api-access-t2xxc" (OuterVolumeSpecName: "kube-api-access-t2xxc") pod "23ed746c-e876-4d38-a91e-6edf38b21347" (UID: "23ed746c-e876-4d38-a91e-6edf38b21347"). InnerVolumeSpecName "kube-api-access-t2xxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.724466 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23ed746c-e876-4d38-a91e-6edf38b21347" (UID: "23ed746c-e876-4d38-a91e-6edf38b21347"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.761736 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ed746c-e876-4d38-a91e-6edf38b21347-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:38 crc kubenswrapper[4675]: I1121 14:42:38.761812 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xxc\" (UniqueName: \"kubernetes.io/projected/23ed746c-e876-4d38-a91e-6edf38b21347-kube-api-access-t2xxc\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.082865 4675 generic.go:334] "Generic (PLEG): container finished" podID="23ed746c-e876-4d38-a91e-6edf38b21347" containerID="cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a" exitCode=0 Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.082905 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerDied","Data":"cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a"} Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.082932 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2twk" event={"ID":"23ed746c-e876-4d38-a91e-6edf38b21347","Type":"ContainerDied","Data":"1c9088dc4b46dcb37326adea4336c09c714ac91e8b4f014b249cb99158e3806b"} Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.082947 4675 scope.go:117] "RemoveContainer" containerID="cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.083057 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2twk" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.110190 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2twk"] Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.119947 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2twk"] Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.126854 4675 scope.go:117] "RemoveContainer" containerID="cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.161219 4675 scope.go:117] "RemoveContainer" containerID="a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.222247 4675 scope.go:117] "RemoveContainer" containerID="cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a" Nov 21 14:42:39 crc kubenswrapper[4675]: E1121 14:42:39.222856 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a\": container with ID starting with cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a not found: ID does not exist" containerID="cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.222920 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a"} err="failed to get container status \"cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a\": rpc error: code = NotFound desc = could not find container \"cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a\": container with ID starting with cf89861ffa938848382e97b4465014e2a898c157dd8ec12cb5b4844bc6b4b19a not found: ID does not exist" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.222962 4675 scope.go:117] "RemoveContainer" containerID="cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f" Nov 21 14:42:39 crc kubenswrapper[4675]: E1121 14:42:39.223441 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f\": container with ID starting with cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f not found: ID does not exist" containerID="cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.223581 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f"} err="failed to get container status \"cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f\": rpc error: code = NotFound desc = could not find container \"cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f\": container with ID starting with cc649b5cc0b3a38c8e168bb16d11a1664845a40f79ed9a416ff272e6c8151f9f not found: ID does not exist" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.223694 4675 scope.go:117] "RemoveContainer" containerID="a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c" Nov 21 14:42:39 crc kubenswrapper[4675]: E1121 14:42:39.224174 4675 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c\": container with ID starting with a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c not found: ID does not exist" containerID="a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c" Nov 21 14:42:39 crc kubenswrapper[4675]: I1121 14:42:39.224214 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c"} err="failed to get container status \"a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c\": rpc error: code = NotFound desc = could not find container \"a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c\": container with ID starting with a019797dd8cb14d6f9145e72bff33e0f32bb25eb4c9a655cf46d50907c525f9c not found: ID does not exist" Nov 21 14:42:40 crc kubenswrapper[4675]: I1121 14:42:40.871508 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" path="/var/lib/kubelet/pods/23ed746c-e876-4d38-a91e-6edf38b21347/volumes" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.309654 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wtt"] Nov 21 14:43:01 crc kubenswrapper[4675]: E1121 14:43:01.311129 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="extract-utilities" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311155 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="extract-utilities" Nov 21 14:43:01 crc kubenswrapper[4675]: E1121 14:43:01.311199 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="extract-content" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311215 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="extract-content" Nov 21 14:43:01 crc kubenswrapper[4675]: E1121 14:43:01.311250 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="extract-utilities" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311264 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="extract-utilities" Nov 21 14:43:01 crc kubenswrapper[4675]: E1121 14:43:01.311301 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="registry-server" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311314 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="registry-server" Nov 21 14:43:01 crc kubenswrapper[4675]: E1121 14:43:01.311337 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="registry-server" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311350 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="registry-server" Nov 21 14:43:01 crc kubenswrapper[4675]: E1121 14:43:01.311403 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" 
containerName="extract-content" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311416 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="extract-content" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311903 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a60cf93-6701-402c-a116-5de1a66139b8" containerName="registry-server" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.311952 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ed746c-e876-4d38-a91e-6edf38b21347" containerName="registry-server" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.315319 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.326506 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wtt"] Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.480520 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfz8\" (UniqueName: \"kubernetes.io/projected/005bad27-7d0b-4faa-830c-2fdf9f2923a0-kube-api-access-hqfz8\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.480980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-utilities\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.481207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-catalog-content\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.583571 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-catalog-content\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.583736 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfz8\" (UniqueName: \"kubernetes.io/projected/005bad27-7d0b-4faa-830c-2fdf9f2923a0-kube-api-access-hqfz8\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.583820 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-utilities\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.584477 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-utilities\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.584615 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-catalog-content\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.608413 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfz8\" (UniqueName: \"kubernetes.io/projected/005bad27-7d0b-4faa-830c-2fdf9f2923a0-kube-api-access-hqfz8\") pod \"redhat-marketplace-q7wtt\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:01 crc kubenswrapper[4675]: I1121 14:43:01.664603 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.205553 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wtt"] Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.296771 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-288mk"] Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.300531 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.308288 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-288mk"] Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.403519 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerStarted","Data":"a2cc23f5dff1c1e7af30667cb01eacb6fcc59d8be022317fa0dc94099d3932c9"} Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.410356 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-utilities\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.410407 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/32533d06-79cd-406e-bfe6-277f83e3b992-kube-api-access-2l7lp\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.410582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-catalog-content\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.514309 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-utilities\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.514371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/32533d06-79cd-406e-bfe6-277f83e3b992-kube-api-access-2l7lp\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.514534 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-catalog-content\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.514958 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-utilities\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.515011 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-catalog-content\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.534874 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/32533d06-79cd-406e-bfe6-277f83e3b992-kube-api-access-2l7lp\") pod \"redhat-operators-288mk\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:02 crc kubenswrapper[4675]: I1121 14:43:02.754842 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:03 crc kubenswrapper[4675]: I1121 14:43:03.258727 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-288mk"] Nov 21 14:43:03 crc kubenswrapper[4675]: I1121 14:43:03.419788 4675 generic.go:334] "Generic (PLEG): container finished" podID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerID="34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937" exitCode=0 Nov 21 14:43:03 crc kubenswrapper[4675]: I1121 14:43:03.419899 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerDied","Data":"34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937"} Nov 21 14:43:03 crc kubenswrapper[4675]: I1121 14:43:03.425485 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerStarted","Data":"22984107c7ddacc9227f78b50828af3e58d23fb36b8f631034e5a72c5b173195"} Nov 21 14:43:04 crc kubenswrapper[4675]: I1121 14:43:04.441632 4675 generic.go:334] "Generic (PLEG): container finished" podID="32533d06-79cd-406e-bfe6-277f83e3b992" containerID="5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4" exitCode=0 Nov 21 14:43:04 crc kubenswrapper[4675]: I1121 14:43:04.441695 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerDied","Data":"5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4"} Nov 21 14:43:04 crc kubenswrapper[4675]: I1121 14:43:04.447103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerStarted","Data":"e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761"} Nov 21 14:43:05 crc kubenswrapper[4675]: I1121 14:43:05.467430 4675 generic.go:334] "Generic (PLEG): container finished" podID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerID="e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761" exitCode=0 Nov 21 14:43:05 crc kubenswrapper[4675]: I1121 14:43:05.468086 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerDied","Data":"e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761"} Nov 21 14:43:05 crc kubenswrapper[4675]: I1121 14:43:05.473670 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerStarted","Data":"2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca"} Nov 21 14:43:06 crc kubenswrapper[4675]: I1121 14:43:06.489386 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerStarted","Data":"e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1"} Nov 21 14:43:07 crc kubenswrapper[4675]: I1121 14:43:07.530359 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7wtt" podStartSLOduration=3.854455444 podStartE2EDuration="6.530341635s" podCreationTimestamp="2025-11-21 
14:43:01 +0000 UTC" firstStartedPulling="2025-11-21 14:43:03.432219676 +0000 UTC m=+4260.158634403" lastFinishedPulling="2025-11-21 14:43:06.108105857 +0000 UTC m=+4262.834520594" observedRunningTime="2025-11-21 14:43:07.518917937 +0000 UTC m=+4264.245332664" watchObservedRunningTime="2025-11-21 14:43:07.530341635 +0000 UTC m=+4264.256756362" Nov 21 14:43:11 crc kubenswrapper[4675]: I1121 14:43:11.665475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:11 crc kubenswrapper[4675]: I1121 14:43:11.666255 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:11 crc kubenswrapper[4675]: I1121 14:43:11.753943 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:13 crc kubenswrapper[4675]: I1121 14:43:13.357278 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:13 crc kubenswrapper[4675]: I1121 14:43:13.410725 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wtt"] Nov 21 14:43:13 crc kubenswrapper[4675]: I1121 14:43:13.610345 4675 generic.go:334] "Generic (PLEG): container finished" podID="32533d06-79cd-406e-bfe6-277f83e3b992" containerID="2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca" exitCode=0 Nov 21 14:43:13 crc kubenswrapper[4675]: I1121 14:43:13.610438 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerDied","Data":"2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca"} Nov 21 14:43:14 crc kubenswrapper[4675]: I1121 14:43:14.625942 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerStarted","Data":"57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2"} Nov 21 14:43:14 crc kubenswrapper[4675]: I1121 14:43:14.626396 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7wtt" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="registry-server" containerID="cri-o://e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1" gracePeriod=2 Nov 21 14:43:14 crc kubenswrapper[4675]: I1121 14:43:14.654805 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-288mk" podStartSLOduration=3.090679577 podStartE2EDuration="12.654785192s" podCreationTimestamp="2025-11-21 14:43:02 +0000 UTC" firstStartedPulling="2025-11-21 14:43:04.446049753 +0000 UTC m=+4261.172464490" lastFinishedPulling="2025-11-21 14:43:14.010155368 +0000 UTC m=+4270.736570105" observedRunningTime="2025-11-21 14:43:14.651906629 +0000 UTC m=+4271.378321356" watchObservedRunningTime="2025-11-21 14:43:14.654785192 +0000 UTC m=+4271.381199919" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.277324 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.349308 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfz8\" (UniqueName: \"kubernetes.io/projected/005bad27-7d0b-4faa-830c-2fdf9f2923a0-kube-api-access-hqfz8\") pod \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.349378 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-catalog-content\") pod \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.349789 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-utilities\") pod \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\" (UID: \"005bad27-7d0b-4faa-830c-2fdf9f2923a0\") " Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.351613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-utilities" (OuterVolumeSpecName: "utilities") pod "005bad27-7d0b-4faa-830c-2fdf9f2923a0" (UID: "005bad27-7d0b-4faa-830c-2fdf9f2923a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.359868 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005bad27-7d0b-4faa-830c-2fdf9f2923a0-kube-api-access-hqfz8" (OuterVolumeSpecName: "kube-api-access-hqfz8") pod "005bad27-7d0b-4faa-830c-2fdf9f2923a0" (UID: "005bad27-7d0b-4faa-830c-2fdf9f2923a0"). InnerVolumeSpecName "kube-api-access-hqfz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.373704 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "005bad27-7d0b-4faa-830c-2fdf9f2923a0" (UID: "005bad27-7d0b-4faa-830c-2fdf9f2923a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.453551 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.453581 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfz8\" (UniqueName: \"kubernetes.io/projected/005bad27-7d0b-4faa-830c-2fdf9f2923a0-kube-api-access-hqfz8\") on node \"crc\" DevicePath \"\"" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.453592 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005bad27-7d0b-4faa-830c-2fdf9f2923a0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.639504 4675 generic.go:334] "Generic (PLEG): container finished" podID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerID="e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1" exitCode=0 Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.639580 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7wtt" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.639573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerDied","Data":"e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1"} Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.639941 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7wtt" event={"ID":"005bad27-7d0b-4faa-830c-2fdf9f2923a0","Type":"ContainerDied","Data":"a2cc23f5dff1c1e7af30667cb01eacb6fcc59d8be022317fa0dc94099d3932c9"} Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.639979 4675 scope.go:117] "RemoveContainer" containerID="e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.684539 4675 scope.go:117] "RemoveContainer" containerID="e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.719470 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wtt"] Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.731636 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7wtt"] Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.736191 4675 scope.go:117] "RemoveContainer" containerID="34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.774320 4675 scope.go:117] "RemoveContainer" containerID="e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1" Nov 21 14:43:15 crc kubenswrapper[4675]: E1121 14:43:15.775160 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1\": container with ID starting with e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1 not found: ID does not exist" containerID="e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.775214 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1"} err="failed to get container status \"e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1\": rpc error: code = NotFound desc = could not find container \"e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1\": container with ID starting with e2a0a764757c960f079535b8f1dba65d76e6ed21ad21c16d9d25fe8e488cdad1 not found: ID does not exist" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.775249 4675 scope.go:117] "RemoveContainer" containerID="e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761" Nov 21 14:43:15 crc kubenswrapper[4675]: E1121 14:43:15.775672 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761\": container with ID starting with e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761 not found: ID does not exist" containerID="e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.775711 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761"} err="failed to get container status \"e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761\": rpc error: code = NotFound desc = could not find container \"e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761\": container with ID starting with e09b810a01be5af08f68a67537ea8c7290ec57581ef783b6380dfe66d9274761 not found: ID does not exist" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.775736 4675 scope.go:117] "RemoveContainer" containerID="34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937" Nov 21 14:43:15 crc kubenswrapper[4675]: E1121 14:43:15.776337 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937\": container with ID starting with 34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937 not found: ID does not exist" containerID="34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937" Nov 21 14:43:15 crc kubenswrapper[4675]: I1121 14:43:15.776372 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937"} err="failed to get container status \"34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937\": rpc error: code = NotFound desc = could not find container \"34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937\": container with ID starting with 34b65b86ed2b0a4f169ab6860038bcca45ef1e0d7c34db0115f48fa7c27d9937 not found: ID does not exist" Nov 21 14:43:16 crc kubenswrapper[4675]: I1121 14:43:16.865743 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" path="/var/lib/kubelet/pods/005bad27-7d0b-4faa-830c-2fdf9f2923a0/volumes" Nov 21 14:43:22 crc kubenswrapper[4675]: I1121 14:43:22.755587 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:22 crc kubenswrapper[4675]: I1121 14:43:22.756122 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:22 crc kubenswrapper[4675]: I1121 14:43:22.836845 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:23 crc kubenswrapper[4675]: I1121 14:43:23.828291 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:23 crc kubenswrapper[4675]: I1121 14:43:23.893773 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-288mk"] Nov 21 14:43:25 crc kubenswrapper[4675]: I1121 14:43:25.770696 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-288mk" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="registry-server" containerID="cri-o://57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2" gracePeriod=2 Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.342134 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.464093 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-utilities\") pod \"32533d06-79cd-406e-bfe6-277f83e3b992\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.464502 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-catalog-content\") pod \"32533d06-79cd-406e-bfe6-277f83e3b992\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.464752 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/32533d06-79cd-406e-bfe6-277f83e3b992-kube-api-access-2l7lp\") pod \"32533d06-79cd-406e-bfe6-277f83e3b992\" (UID: \"32533d06-79cd-406e-bfe6-277f83e3b992\") " Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.465621 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-utilities" (OuterVolumeSpecName: "utilities") pod "32533d06-79cd-406e-bfe6-277f83e3b992" (UID: "32533d06-79cd-406e-bfe6-277f83e3b992"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.467370 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.476295 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32533d06-79cd-406e-bfe6-277f83e3b992-kube-api-access-2l7lp" (OuterVolumeSpecName: "kube-api-access-2l7lp") pod "32533d06-79cd-406e-bfe6-277f83e3b992" (UID: "32533d06-79cd-406e-bfe6-277f83e3b992"). InnerVolumeSpecName "kube-api-access-2l7lp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.570114 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l7lp\" (UniqueName: \"kubernetes.io/projected/32533d06-79cd-406e-bfe6-277f83e3b992-kube-api-access-2l7lp\") on node \"crc\" DevicePath \"\"" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.570746 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32533d06-79cd-406e-bfe6-277f83e3b992" (UID: "32533d06-79cd-406e-bfe6-277f83e3b992"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.672083 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32533d06-79cd-406e-bfe6-277f83e3b992-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.787775 4675 generic.go:334] "Generic (PLEG): container finished" podID="32533d06-79cd-406e-bfe6-277f83e3b992" containerID="57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2" exitCode=0 Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.787826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerDied","Data":"57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2"} Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.787857 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-288mk" event={"ID":"32533d06-79cd-406e-bfe6-277f83e3b992","Type":"ContainerDied","Data":"22984107c7ddacc9227f78b50828af3e58d23fb36b8f631034e5a72c5b173195"} Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.787874 4675 scope.go:117] "RemoveContainer" containerID="57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.787875 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-288mk" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.818952 4675 scope.go:117] "RemoveContainer" containerID="2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.866729 4675 scope.go:117] "RemoveContainer" containerID="5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.879198 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-288mk"] Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.879595 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-288mk"] Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.913266 4675 scope.go:117] "RemoveContainer" containerID="57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2" Nov 21 14:43:26 crc kubenswrapper[4675]: E1121 14:43:26.914043 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2\": container with ID starting with 57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2 not found: ID does not exist" containerID="57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.914128 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2"} err="failed to get container status \"57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2\": rpc error: code = NotFound desc = could not find container \"57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2\": container with ID starting with 57660684c21cb97b12ffa0140cde254da6937c303b6163c27c755917f25786b2 not found: ID does not exist" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.914165 4675 scope.go:117] "RemoveContainer" containerID="2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca" Nov 21 14:43:26 crc kubenswrapper[4675]: E1121 14:43:26.914631 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca\": container with ID starting with 2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca not found: ID does not exist" containerID="2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.914785 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca"} err="failed to get container status \"2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca\": rpc error: code = NotFound desc = could not find container \"2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca\": container with ID starting with 2fcb05e6d2af7bbfd33886d274843e9516446b1ecc46761010029d4a170b98ca not found: ID does not exist" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.914903 4675 scope.go:117] "RemoveContainer" containerID="5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4" Nov 21 14:43:26 crc kubenswrapper[4675]: E1121 14:43:26.915351 4675 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4\": container with ID starting with 5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4 not found: ID does not exist" containerID="5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4" Nov 21 14:43:26 crc kubenswrapper[4675]: I1121 14:43:26.915548 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4"} err="failed to get container status \"5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4\": rpc error: code = NotFound desc = could not find container \"5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4\": container with ID starting with 5ad75699c315d2d6fade3fc3ca3ee488dcbfa8b61de97068a3cc5d59138820b4 not found: ID does not exist" Nov 21 14:43:28 crc kubenswrapper[4675]: I1121 14:43:28.869643 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" path="/var/lib/kubelet/pods/32533d06-79cd-406e-bfe6-277f83e3b992/volumes" Nov 21 14:44:16 crc kubenswrapper[4675]: I1121 14:44:16.137037 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:44:16 crc kubenswrapper[4675]: I1121 14:44:16.137661 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:44:46 crc kubenswrapper[4675]: I1121 14:44:46.136434 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:44:46 crc kubenswrapper[4675]: I1121 14:44:46.137054 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.169633 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt"] Nov 21 14:45:00 crc kubenswrapper[4675]: E1121 14:45:00.170580 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="extract-content" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170596 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="extract-content" Nov 21 14:45:00 crc kubenswrapper[4675]: E1121 14:45:00.170625 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="extract-utilities" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170631 4675 
Nov 21 14:45:00 crc kubenswrapper[4675]: E1121 14:45:00.170645 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="extract-content"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170651 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="extract-content"
Nov 21 14:45:00 crc kubenswrapper[4675]: E1121 14:45:00.170668 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="registry-server"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170675 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="registry-server"
Nov 21 14:45:00 crc kubenswrapper[4675]: E1121 14:45:00.170695 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="extract-utilities"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170700 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="extract-utilities"
Nov 21 14:45:00 crc kubenswrapper[4675]: E1121 14:45:00.170709 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="registry-server"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170715 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="registry-server"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170933 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="005bad27-7d0b-4faa-830c-2fdf9f2923a0" containerName="registry-server"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.170956 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="32533d06-79cd-406e-bfe6-277f83e3b992" containerName="registry-server"
Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.171847 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt"
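RemoveStaleState above is the CPU and memory managers dropping bookkeeping for pods that no longer exist before admitting the new collect-profiles pod. A toy illustration of the idea, pruning a map keyed by podUID and container name against the set of live pods (the real cpu_manager state is more involved; this is only a sketch of the shape):

```go
// Sketch: prune per-container resource assignments for pods no longer alive.
package main

import "fmt"

type key struct{ podUID, container string }

func pruneStale(assignments map[key]string, livePods map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !livePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"32533d06-79cd-406e-bfe6-277f83e3b992", "registry-server"}:  "cpuset:0-3",
		{"15891d1f-9fd5-42aa-848f-17801cb34ec7", "collect-profiles"}: "cpuset:0-3",
	}
	// Only the collect-profiles pod is still known to the API server.
	pruneStale(state, map[string]bool{"15891d1f-9fd5-42aa-848f-17801cb34ec7": true})
	fmt.Println("remaining assignments:", len(state))
}
```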
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.174612 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.174824 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.188721 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt"] Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.266048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74z4c\" (UniqueName: \"kubernetes.io/projected/15891d1f-9fd5-42aa-848f-17801cb34ec7-kube-api-access-74z4c\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.266164 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15891d1f-9fd5-42aa-848f-17801cb34ec7-config-volume\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.266190 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15891d1f-9fd5-42aa-848f-17801cb34ec7-secret-volume\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.368948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74z4c\" (UniqueName: \"kubernetes.io/projected/15891d1f-9fd5-42aa-848f-17801cb34ec7-kube-api-access-74z4c\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.369149 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15891d1f-9fd5-42aa-848f-17801cb34ec7-config-volume\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.369203 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15891d1f-9fd5-42aa-848f-17801cb34ec7-secret-volume\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.370226 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15891d1f-9fd5-42aa-848f-17801cb34ec7-config-volume\") pod 
\"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.381424 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15891d1f-9fd5-42aa-848f-17801cb34ec7-secret-volume\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.406808 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74z4c\" (UniqueName: \"kubernetes.io/projected/15891d1f-9fd5-42aa-848f-17801cb34ec7-kube-api-access-74z4c\") pod \"collect-profiles-29395605-b4fvt\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:00 crc kubenswrapper[4675]: I1121 14:45:00.502592 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:01 crc kubenswrapper[4675]: I1121 14:45:01.085658 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt"] Nov 21 14:45:02 crc kubenswrapper[4675]: I1121 14:45:02.011816 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" event={"ID":"15891d1f-9fd5-42aa-848f-17801cb34ec7","Type":"ContainerStarted","Data":"2d900c1d79ef2ad73e096336d9597345fbd3ec387b8f3c7c68ebd7d46861980e"} Nov 21 14:45:02 crc kubenswrapper[4675]: I1121 14:45:02.011912 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" event={"ID":"15891d1f-9fd5-42aa-848f-17801cb34ec7","Type":"ContainerStarted","Data":"a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655"} Nov 21 14:45:02 crc kubenswrapper[4675]: I1121 14:45:02.049296 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" podStartSLOduration=2.049266089 podStartE2EDuration="2.049266089s" podCreationTimestamp="2025-11-21 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:45:02.03465314 +0000 UTC m=+4378.761067907" watchObservedRunningTime="2025-11-21 14:45:02.049266089 +0000 UTC m=+4378.775680856" Nov 21 14:45:03 crc kubenswrapper[4675]: I1121 14:45:03.052289 4675 generic.go:334] "Generic (PLEG): container finished" podID="15891d1f-9fd5-42aa-848f-17801cb34ec7" containerID="2d900c1d79ef2ad73e096336d9597345fbd3ec387b8f3c7c68ebd7d46861980e" exitCode=0 Nov 21 14:45:03 crc kubenswrapper[4675]: I1121 14:45:03.053155 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" event={"ID":"15891d1f-9fd5-42aa-848f-17801cb34ec7","Type":"ContainerDied","Data":"2d900c1d79ef2ad73e096336d9597345fbd3ec387b8f3c7c68ebd7d46861980e"} Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.500772 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.584871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74z4c\" (UniqueName: \"kubernetes.io/projected/15891d1f-9fd5-42aa-848f-17801cb34ec7-kube-api-access-74z4c\") pod \"15891d1f-9fd5-42aa-848f-17801cb34ec7\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.585295 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15891d1f-9fd5-42aa-848f-17801cb34ec7-config-volume\") pod \"15891d1f-9fd5-42aa-848f-17801cb34ec7\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.585501 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15891d1f-9fd5-42aa-848f-17801cb34ec7-secret-volume\") pod \"15891d1f-9fd5-42aa-848f-17801cb34ec7\" (UID: \"15891d1f-9fd5-42aa-848f-17801cb34ec7\") " Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.586236 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15891d1f-9fd5-42aa-848f-17801cb34ec7-config-volume" (OuterVolumeSpecName: "config-volume") pod "15891d1f-9fd5-42aa-848f-17801cb34ec7" (UID: "15891d1f-9fd5-42aa-848f-17801cb34ec7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.596581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15891d1f-9fd5-42aa-848f-17801cb34ec7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15891d1f-9fd5-42aa-848f-17801cb34ec7" (UID: "15891d1f-9fd5-42aa-848f-17801cb34ec7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.597104 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15891d1f-9fd5-42aa-848f-17801cb34ec7-kube-api-access-74z4c" (OuterVolumeSpecName: "kube-api-access-74z4c") pod "15891d1f-9fd5-42aa-848f-17801cb34ec7" (UID: "15891d1f-9fd5-42aa-848f-17801cb34ec7"). InnerVolumeSpecName "kube-api-access-74z4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.690061 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15891d1f-9fd5-42aa-848f-17801cb34ec7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.690354 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15891d1f-9fd5-42aa-848f-17801cb34ec7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:04 crc kubenswrapper[4675]: I1121 14:45:04.690413 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74z4c\" (UniqueName: \"kubernetes.io/projected/15891d1f-9fd5-42aa-848f-17801cb34ec7-kube-api-access-74z4c\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:05 crc kubenswrapper[4675]: I1121 14:45:05.089813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" event={"ID":"15891d1f-9fd5-42aa-848f-17801cb34ec7","Type":"ContainerDied","Data":"a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655"} Nov 21 14:45:05 crc kubenswrapper[4675]: I1121 14:45:05.089902 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655" Nov 21 14:45:05 crc kubenswrapper[4675]: I1121 14:45:05.090165 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-b4fvt" Nov 21 14:45:05 crc kubenswrapper[4675]: I1121 14:45:05.135543 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v"] Nov 21 14:45:05 crc kubenswrapper[4675]: I1121 14:45:05.147444 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-flp8v"] Nov 21 14:45:06 crc kubenswrapper[4675]: I1121 14:45:06.867673 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a79992-9dba-48d6-96b3-ceeebc63cedd" path="/var/lib/kubelet/pods/01a79992-9dba-48d6-96b3-ceeebc63cedd/volumes" Nov 21 14:45:10 crc kubenswrapper[4675]: E1121 14:45:10.826804 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache]" Nov 21 14:45:13 crc kubenswrapper[4675]: E1121 14:45:13.067827 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache]" Nov 21 14:45:16 crc kubenswrapper[4675]: I1121 14:45:16.135986 4675 patch_prober.go:28] interesting 
Nov 21 14:45:16 crc kubenswrapper[4675]: I1121 14:45:16.136398 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:45:16 crc kubenswrapper[4675]: I1121 14:45:16.136441 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx"
Nov 21 14:45:16 crc kubenswrapper[4675]: I1121 14:45:16.137249 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 14:45:16 crc kubenswrapper[4675]: I1121 14:45:16.137297 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" gracePeriod=600
Nov 21 14:45:16 crc kubenswrapper[4675]: E1121 14:45:16.285240 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:45:17 crc kubenswrapper[4675]: I1121 14:45:17.234874 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" exitCode=0
Nov 21 14:45:17 crc kubenswrapper[4675]: I1121 14:45:17.234956 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"}
Nov 21 14:45:17 crc kubenswrapper[4675]: I1121 14:45:17.235911 4675 scope.go:117] "RemoveContainer" containerID="269816fb93c202a0c84f4606751e8acce9638cd3d82728a5f2d8489920decfbb"
Nov 21 14:45:17 crc kubenswrapper[4675]: I1121 14:45:17.236791 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:45:17 crc kubenswrapper[4675]: E1121 14:45:17.237123 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:45:21 crc kubenswrapper[4675]: E1121 14:45:21.148491 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:27 crc kubenswrapper[4675]: I1121 14:45:27.316475 4675 scope.go:117] "RemoveContainer" containerID="5e7048f2f96279f541c39a0f73a82d035c4ddb920b98f74ab814fdc015ba0c35"
Nov 21 14:45:28 crc kubenswrapper[4675]: E1121 14:45:28.350687 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:31 crc kubenswrapper[4675]: E1121 14:45:31.197547 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:32 crc kubenswrapper[4675]: I1121 14:45:32.849209 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:45:32 crc kubenswrapper[4675]: E1121 14:45:32.849773 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:45:41 crc kubenswrapper[4675]: E1121 14:45:41.509476 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:43 crc kubenswrapper[4675]: E1121 14:45:43.072122 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:47 crc kubenswrapper[4675]: I1121 14:45:47.850239 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:45:47 crc kubenswrapper[4675]: E1121 14:45:47.851416 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:45:48 crc kubenswrapper[4675]: E1121 14:45:48.267915 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:48 crc kubenswrapper[4675]: E1121 14:45:48.268049 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:51 crc kubenswrapper[4675]: E1121 14:45:51.573720 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:58 crc kubenswrapper[4675]: E1121 14:45:58.328887 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache]"
Nov 21 14:45:58 crc kubenswrapper[4675]: I1121 14:45:58.850925 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:45:58 crc kubenswrapper[4675]: E1121 14:45:58.851625 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:46:01 crc kubenswrapper[4675]: E1121 14:46:01.632724 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15891d1f_9fd5_42aa_848f_17801cb34ec7.slice/crio-a8c2aeb564b1ddb5b51b6e1b92ee1c02fa906eae4bff96d694789114cacbb655\": RecentStats: unable to find data in memory cache]"
Nov 21 14:46:12 crc kubenswrapper[4675]: I1121 14:46:12.849640 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:46:12 crc kubenswrapper[4675]: E1121 14:46:12.850719 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:46:26 crc kubenswrapper[4675]: I1121 14:46:26.851773 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:46:26 crc kubenswrapper[4675]: E1121 14:46:26.853250 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:46:37 crc kubenswrapper[4675]: I1121 14:46:37.849905 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:46:37 crc kubenswrapper[4675]: E1121 14:46:37.851246 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:46:49 crc kubenswrapper[4675]: I1121 14:46:49.849644 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8"
Nov 21 14:46:49 crc kubenswrapper[4675]: E1121 14:46:49.850485 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:47:03 crc kubenswrapper[4675]: I1121 14:47:03.849591 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:47:03 crc kubenswrapper[4675]: E1121 14:47:03.850744 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:47:14 crc kubenswrapper[4675]: I1121 14:47:14.857469 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:47:14 crc kubenswrapper[4675]: E1121 14:47:14.858317 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:47:25 crc kubenswrapper[4675]: I1121 14:47:25.849354 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:47:25 crc kubenswrapper[4675]: E1121 14:47:25.850262 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:47:39 crc kubenswrapper[4675]: I1121 14:47:39.850610 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:47:39 crc kubenswrapper[4675]: E1121 14:47:39.852177 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:47:53 crc kubenswrapper[4675]: I1121 14:47:53.849854 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:47:53 crc kubenswrapper[4675]: E1121 14:47:53.850949 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:48:04 crc kubenswrapper[4675]: I1121 14:48:04.856805 4675 scope.go:117] "RemoveContainer" 
containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:48:04 crc kubenswrapper[4675]: E1121 14:48:04.857824 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:48:19 crc kubenswrapper[4675]: I1121 14:48:19.848741 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:48:19 crc kubenswrapper[4675]: E1121 14:48:19.849532 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:48:34 crc kubenswrapper[4675]: I1121 14:48:34.857183 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:48:34 crc kubenswrapper[4675]: E1121 14:48:34.858029 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:48:47 crc kubenswrapper[4675]: I1121 14:48:47.849625 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:48:47 crc kubenswrapper[4675]: E1121 14:48:47.850556 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.361856 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 14:48:50 crc kubenswrapper[4675]: E1121 14:48:50.362931 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15891d1f-9fd5-42aa-848f-17801cb34ec7" containerName="collect-profiles" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.362947 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="15891d1f-9fd5-42aa-848f-17801cb34ec7" containerName="collect-profiles" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.363247 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="15891d1f-9fd5-42aa-848f-17801cb34ec7" containerName="collect-profiles" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.364039 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.366455 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.368952 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.369545 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pw5wd" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.369572 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.385737 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.508589 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.508671 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.508730 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b22t\" (UniqueName: \"kubernetes.io/projected/71faa523-7927-4fc1-bb12-0f787758620a-kube-api-access-5b22t\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.508820 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.508907 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.508934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.509042 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.509098 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-config-data\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.509217 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.611814 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.611904 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.611957 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b22t\" (UniqueName: \"kubernetes.io/projected/71faa523-7927-4fc1-bb12-0f787758620a-kube-api-access-5b22t\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612110 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612196 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: 
\"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612228 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-config-data\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612303 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.612739 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.614054 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-config-data\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.614055 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.614163 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.620442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.620507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 
14:48:50.621580 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.633834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b22t\" (UniqueName: \"kubernetes.io/projected/71faa523-7927-4fc1-bb12-0f787758620a-kube-api-access-5b22t\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.655485 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " pod="openstack/tempest-tests-tempest" Nov 21 14:48:50 crc kubenswrapper[4675]: I1121 14:48:50.691801 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 14:48:51 crc kubenswrapper[4675]: I1121 14:48:51.300626 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 14:48:51 crc kubenswrapper[4675]: I1121 14:48:51.315540 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:48:51 crc kubenswrapper[4675]: I1121 14:48:51.953801 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"71faa523-7927-4fc1-bb12-0f787758620a","Type":"ContainerStarted","Data":"d52e3f21b8823a47a35cb23b1388e9db5be6cec06f8e9dc0b471924b29459d2b"} Nov 21 14:48:58 crc kubenswrapper[4675]: I1121 14:48:58.850810 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:48:58 crc kubenswrapper[4675]: E1121 14:48:58.854780 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:49:13 crc kubenswrapper[4675]: I1121 14:49:13.849810 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:49:13 crc kubenswrapper[4675]: E1121 14:49:13.850514 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:49:27 crc kubenswrapper[4675]: I1121 14:49:27.849985 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:49:27 crc kubenswrapper[4675]: E1121 14:49:27.851145 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
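The PullImage failure just below ("rpc error: code = Canceled desc = copying config: context canceled") is a CRI call whose context was canceled partway through copying the image. A stdlib-only sketch of the pattern that produces such an error, with a simulated chunked copy standing in for the layer download:

```go
// Sketch: a context-aware "pull" that aborts mid-copy when canceled,
// the failure mode reported for the tempest image below.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pull simulates copying image data in chunks, checking the context
// between chunks the way a context-aware copier would.
func pull(ctx context.Context) error {
	for i := 0; i < 100; i++ {
		select {
		case <-ctx.Done():
			return fmt.Errorf("copying config: %w", ctx.Err())
		case <-time.After(50 * time.Millisecond): // one "chunk"
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(200 * time.Millisecond); cancel() }() // caller gives up
	err := pull(ctx)
	fmt.Println("pull error:", err, "| canceled:", errors.Is(err, context.Canceled))
}
```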
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:49:38 crc kubenswrapper[4675]: I1121 14:49:38.850589 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:49:38 crc kubenswrapper[4675]: E1121 14:49:38.851647 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:49:43 crc kubenswrapper[4675]: E1121 14:49:43.809941 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 21 14:49:43 crc kubenswrapper[4675]: E1121 14:49:43.811985 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged
:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(71faa523-7927-4fc1-bb12-0f787758620a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 14:49:43 crc kubenswrapper[4675]: E1121 14:49:43.813402 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="71faa523-7927-4fc1-bb12-0f787758620a" Nov 21 14:49:44 crc kubenswrapper[4675]: E1121 14:49:44.621575 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="71faa523-7927-4fc1-bb12-0f787758620a" Nov 21 14:49:52 crc kubenswrapper[4675]: I1121 14:49:52.850080 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:49:52 crc kubenswrapper[4675]: E1121 14:49:52.851202 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:49:57 crc kubenswrapper[4675]: I1121 14:49:57.638530 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 21 14:50:01 crc kubenswrapper[4675]: I1121 14:50:01.896373 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"71faa523-7927-4fc1-bb12-0f787758620a","Type":"ContainerStarted","Data":"7224b19184a0338249fafd727fcb725b586ee7dd5f53fd0cb299c29069007f47"} Nov 21 14:50:01 crc kubenswrapper[4675]: I1121 14:50:01.928643 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.607796514 podStartE2EDuration="1m12.928616137s" podCreationTimestamp="2025-11-21 14:48:49 +0000 UTC" firstStartedPulling="2025-11-21 14:48:51.315355749 +0000 UTC m=+4608.041770476" lastFinishedPulling="2025-11-21 14:49:57.636175372 +0000 UTC m=+4674.362590099" observedRunningTime="2025-11-21 14:50:01.917419625 +0000 UTC m=+4678.643834372" watchObservedRunningTime="2025-11-21 14:50:01.928616137 +0000 UTC 
m=+4678.655030864" Nov 21 14:50:06 crc kubenswrapper[4675]: I1121 14:50:06.854696 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:50:06 crc kubenswrapper[4675]: E1121 14:50:06.858255 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:50:19 crc kubenswrapper[4675]: I1121 14:50:19.849443 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:50:21 crc kubenswrapper[4675]: I1121 14:50:21.140906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"2634b22282a4bd918df36e3c8783b1ea95bfdfcff63b623fb73e04f873be6fb3"} Nov 21 14:52:46 crc kubenswrapper[4675]: I1121 14:52:46.138471 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:52:46 crc kubenswrapper[4675]: I1121 14:52:46.139154 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.610622 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8bx5"] Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.629225 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.745177 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-utilities\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.745549 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9kx\" (UniqueName: \"kubernetes.io/projected/1de24bae-0f29-4702-9b6f-e12dab9d73fc-kube-api-access-4v9kx\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.746325 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-catalog-content\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.834200 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8bx5"] Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.848134 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-utilities\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.848180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v9kx\" (UniqueName: \"kubernetes.io/projected/1de24bae-0f29-4702-9b6f-e12dab9d73fc-kube-api-access-4v9kx\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.848393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-catalog-content\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.893792 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-utilities\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.914196 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-catalog-content\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:54 crc kubenswrapper[4675]: I1121 14:52:54.921854 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4v9kx\" (UniqueName: \"kubernetes.io/projected/1de24bae-0f29-4702-9b6f-e12dab9d73fc-kube-api-access-4v9kx\") pod \"community-operators-m8bx5\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:52:55 crc kubenswrapper[4675]: I1121 14:52:55.031319 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:53:01 crc kubenswrapper[4675]: I1121 14:53:01.087328 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8bx5"] Nov 21 14:53:01 crc kubenswrapper[4675]: I1121 14:53:01.955205 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerStarted","Data":"eb85a036788a0031c29da50bc53e83fe8a25b5d51452a523a7b47d6f906742b8"} Nov 21 14:53:02 crc kubenswrapper[4675]: I1121 14:53:02.977829 4675 generic.go:334] "Generic (PLEG): container finished" podID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerID="36d7943abfa81f2fa01dfe9a33bda3c5e54cc5a98b3946fbe0333159b638f67f" exitCode=0 Nov 21 14:53:02 crc kubenswrapper[4675]: I1121 14:53:02.978261 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerDied","Data":"36d7943abfa81f2fa01dfe9a33bda3c5e54cc5a98b3946fbe0333159b638f67f"} Nov 21 14:53:05 crc kubenswrapper[4675]: I1121 14:53:05.865929 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bbqzj"] Nov 21 14:53:05 crc kubenswrapper[4675]: I1121 14:53:05.869818 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:05 crc kubenswrapper[4675]: I1121 14:53:05.878937 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbqzj"] Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.011253 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-catalog-content\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.011710 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-utilities\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.012103 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmd6\" (UniqueName: \"kubernetes.io/projected/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-kube-api-access-ssmd6\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.114233 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-utilities\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.114331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmd6\" (UniqueName: \"kubernetes.io/projected/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-kube-api-access-ssmd6\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.114383 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-catalog-content\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.114868 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-utilities\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.115035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-catalog-content\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.142832 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ssmd6\" (UniqueName: \"kubernetes.io/projected/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-kube-api-access-ssmd6\") pod \"redhat-operators-bbqzj\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:06 crc kubenswrapper[4675]: I1121 14:53:06.325777 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:53:07 crc kubenswrapper[4675]: I1121 14:53:07.024640 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerStarted","Data":"aa57635ba80db781862da30ed976eb31762969d4580934aba3901c87e0084814"} Nov 21 14:53:07 crc kubenswrapper[4675]: W1121 14:53:07.093544 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d6f8e00_6d9e_4d52_a689_d1edf8037c57.slice/crio-0578f20793d19e42716f6741a432bdead1a7c9a59a6bd161e054092efbfd9044 WatchSource:0}: Error finding container 0578f20793d19e42716f6741a432bdead1a7c9a59a6bd161e054092efbfd9044: Status 404 returned error can't find the container with id 0578f20793d19e42716f6741a432bdead1a7c9a59a6bd161e054092efbfd9044 Nov 21 14:53:07 crc kubenswrapper[4675]: I1121 14:53:07.101504 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbqzj"] Nov 21 14:53:08 crc kubenswrapper[4675]: I1121 14:53:08.034633 4675 generic.go:334] "Generic (PLEG): container finished" podID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerID="8816469ec28a12357a53e7b9c54120db05b477fa5789c932df48e144ccbbb5e5" exitCode=0 Nov 21 14:53:08 crc kubenswrapper[4675]: I1121 14:53:08.034868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerDied","Data":"8816469ec28a12357a53e7b9c54120db05b477fa5789c932df48e144ccbbb5e5"} Nov 21 14:53:08 crc kubenswrapper[4675]: I1121 14:53:08.035500 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerStarted","Data":"0578f20793d19e42716f6741a432bdead1a7c9a59a6bd161e054092efbfd9044"} Nov 21 14:53:16 crc kubenswrapper[4675]: I1121 14:53:16.136580 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:53:16 crc kubenswrapper[4675]: I1121 14:53:16.137424 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:53:19 crc kubenswrapper[4675]: I1121 14:53:19.797418 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2c8951ed-3fad-45f7-ab94-b1843d1c4114" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.202:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:53:23 crc kubenswrapper[4675]: I1121 
14:53:23.280963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerStarted","Data":"092ba328510cd394302f9a70840d6798b62da00c1bf855783e896a683e0cda5e"} Nov 21 14:53:25 crc kubenswrapper[4675]: I1121 14:53:25.310690 4675 generic.go:334] "Generic (PLEG): container finished" podID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerID="aa57635ba80db781862da30ed976eb31762969d4580934aba3901c87e0084814" exitCode=0 Nov 21 14:53:25 crc kubenswrapper[4675]: I1121 14:53:25.311429 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerDied","Data":"aa57635ba80db781862da30ed976eb31762969d4580934aba3901c87e0084814"} Nov 21 14:53:28 crc kubenswrapper[4675]: I1121 14:53:28.854348 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2c8951ed-3fad-45f7-ab94-b1843d1c4114" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.202:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:53:33 crc kubenswrapper[4675]: I1121 14:53:33.896294 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2c8951ed-3fad-45f7-ab94-b1843d1c4114" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.202:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:53:33 crc kubenswrapper[4675]: I1121 14:53:33.908607 4675 trace.go:236] Trace[1894013221]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (21-Nov-2025 14:53:32.619) (total time: 1283ms): Nov 21 14:53:33 crc kubenswrapper[4675]: Trace[1894013221]: [1.283100255s] [1.283100255s] END Nov 21 14:53:38 crc kubenswrapper[4675]: I1121 14:53:38.953354 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2c8951ed-3fad-45f7-ab94-b1843d1c4114" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.202:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 14:53:38 crc kubenswrapper[4675]: I1121 14:53:38.954105 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 14:53:38 crc kubenswrapper[4675]: I1121 14:53:38.955315 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"e3950b21c337288568330303393b0f3546bf49a0f3bdc5b1d0ed826717c42ad4"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Nov 21 14:53:38 crc kubenswrapper[4675]: I1121 14:53:38.955409 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c8951ed-3fad-45f7-ab94-b1843d1c4114" containerName="cinder-scheduler" containerID="cri-o://e3950b21c337288568330303393b0f3546bf49a0f3bdc5b1d0ed826717c42ad4" gracePeriod=30 Nov 21 14:53:40 crc kubenswrapper[4675]: I1121 14:53:40.529604 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" 
event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerStarted","Data":"03f2034863ed64c48b3d3706971adac16e8f10322220f45acb2223b7e63fb484"} Nov 21 14:53:40 crc kubenswrapper[4675]: I1121 14:53:40.554429 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8bx5" podStartSLOduration=10.17776261 podStartE2EDuration="46.554407475s" podCreationTimestamp="2025-11-21 14:52:54 +0000 UTC" firstStartedPulling="2025-11-21 14:53:02.980354239 +0000 UTC m=+4859.706768966" lastFinishedPulling="2025-11-21 14:53:39.356999114 +0000 UTC m=+4896.083413831" observedRunningTime="2025-11-21 14:53:40.547791308 +0000 UTC m=+4897.274206035" watchObservedRunningTime="2025-11-21 14:53:40.554407475 +0000 UTC m=+4897.280822202" Nov 21 14:53:42 crc kubenswrapper[4675]: I1121 14:53:42.552683 4675 generic.go:334] "Generic (PLEG): container finished" podID="2c8951ed-3fad-45f7-ab94-b1843d1c4114" containerID="e3950b21c337288568330303393b0f3546bf49a0f3bdc5b1d0ed826717c42ad4" exitCode=0 Nov 21 14:53:42 crc kubenswrapper[4675]: I1121 14:53:42.552770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c8951ed-3fad-45f7-ab94-b1843d1c4114","Type":"ContainerDied","Data":"e3950b21c337288568330303393b0f3546bf49a0f3bdc5b1d0ed826717c42ad4"} Nov 21 14:53:45 crc kubenswrapper[4675]: I1121 14:53:45.037895 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:53:45 crc kubenswrapper[4675]: I1121 14:53:45.038554 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.094462 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:53:46 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:53:46 crc kubenswrapper[4675]: > Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.136919 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.137019 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.137141 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.139247 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2634b22282a4bd918df36e3c8783b1ea95bfdfcff63b623fb73e04f873be6fb3"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:53:46 crc 
kubenswrapper[4675]: I1121 14:53:46.139359 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://2634b22282a4bd918df36e3c8783b1ea95bfdfcff63b623fb73e04f873be6fb3" gracePeriod=600 Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.872380 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="2634b22282a4bd918df36e3c8783b1ea95bfdfcff63b623fb73e04f873be6fb3" exitCode=0 Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.872485 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"2634b22282a4bd918df36e3c8783b1ea95bfdfcff63b623fb73e04f873be6fb3"} Nov 21 14:53:46 crc kubenswrapper[4675]: I1121 14:53:46.875667 4675 scope.go:117] "RemoveContainer" containerID="a3bf9e01d084ea49b6dbccfea06c30363e542c81bb1caf9ec3a4067d84b988e8" Nov 21 14:53:48 crc kubenswrapper[4675]: I1121 14:53:48.904368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a"} Nov 21 14:53:53 crc kubenswrapper[4675]: I1121 14:53:53.151425 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-zm72j" podUID="3f6b3f8e-0776-47f2-bbe4-ed0d6af49813" containerName="registry-server" probeResult="failure" output=< Nov 21 14:53:53 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:53:53 crc kubenswrapper[4675]: > Nov 21 14:53:53 crc kubenswrapper[4675]: I1121 14:53:53.151729 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zm72j" podUID="3f6b3f8e-0776-47f2-bbe4-ed0d6af49813" containerName="registry-server" probeResult="failure" output=< Nov 21 14:53:53 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:53:53 crc kubenswrapper[4675]: > Nov 21 14:53:56 crc kubenswrapper[4675]: I1121 14:53:56.106979 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:53:56 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:53:56 crc kubenswrapper[4675]: > Nov 21 14:53:56 crc kubenswrapper[4675]: I1121 14:53:56.993290 4675 generic.go:334] "Generic (PLEG): container finished" podID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerID="092ba328510cd394302f9a70840d6798b62da00c1bf855783e896a683e0cda5e" exitCode=0 Nov 21 14:53:56 crc kubenswrapper[4675]: I1121 14:53:56.993361 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerDied","Data":"092ba328510cd394302f9a70840d6798b62da00c1bf855783e896a683e0cda5e"} Nov 21 14:53:56 crc kubenswrapper[4675]: I1121 14:53:56.997089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2c8951ed-3fad-45f7-ab94-b1843d1c4114","Type":"ContainerStarted","Data":"ddd551f225a31214c410ee44b51af1d483d2e8c6f708c9028d4cd40f5cb7a06b"} Nov 21 14:53:57 crc kubenswrapper[4675]: I1121 14:53:57.007313 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.688822 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2vs27"] Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.692511 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.805571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8r4n\" (UniqueName: \"kubernetes.io/projected/9e4584c4-c607-41be-8c73-b25645fa1764-kube-api-access-h8r4n\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.805646 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-catalog-content\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.806015 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-utilities\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.909081 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8r4n\" (UniqueName: \"kubernetes.io/projected/9e4584c4-c607-41be-8c73-b25645fa1764-kube-api-access-h8r4n\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.909164 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-catalog-content\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:58 crc kubenswrapper[4675]: I1121 14:53:58.909337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-utilities\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:59 crc kubenswrapper[4675]: I1121 14:53:59.075974 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vs27"] Nov 21 14:53:59 crc kubenswrapper[4675]: I1121 14:53:59.342372 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-catalog-content\") pod 
\"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:59 crc kubenswrapper[4675]: I1121 14:53:59.384142 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-utilities\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:59 crc kubenswrapper[4675]: I1121 14:53:59.811441 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8r4n\" (UniqueName: \"kubernetes.io/projected/9e4584c4-c607-41be-8c73-b25645fa1764-kube-api-access-h8r4n\") pod \"redhat-marketplace-2vs27\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:53:59 crc kubenswrapper[4675]: I1121 14:53:59.880826 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:00 crc kubenswrapper[4675]: I1121 14:54:00.736721 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 14:54:00 crc kubenswrapper[4675]: I1121 14:54:00.763186 4675 trace.go:236] Trace[348766960]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-pwjwv" (21-Nov-2025 14:53:58.196) (total time: 2566ms): Nov 21 14:54:00 crc kubenswrapper[4675]: Trace[348766960]: [2.56694638s] [2.56694638s] END Nov 21 14:54:01 crc kubenswrapper[4675]: I1121 14:54:01.676287 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vs27"] Nov 21 14:54:02 crc kubenswrapper[4675]: I1121 14:54:02.056999 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerStarted","Data":"abd1ed6b60661e343ce800e3b68b95968e186417b451b7f3458ba0a22c0b2445"} Nov 21 14:54:02 crc kubenswrapper[4675]: I1121 14:54:02.059826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerStarted","Data":"1b2e96b8ae737761a4c681d8dff28707d75ab1b50c1712ff7ffd2461b8d307e8"} Nov 21 14:54:02 crc kubenswrapper[4675]: I1121 14:54:02.079114 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bbqzj" podStartSLOduration=4.149450416 podStartE2EDuration="57.079090179s" podCreationTimestamp="2025-11-21 14:53:05 +0000 UTC" firstStartedPulling="2025-11-21 14:53:08.036914777 +0000 UTC m=+4864.763329504" lastFinishedPulling="2025-11-21 14:54:00.96655454 +0000 UTC m=+4917.692969267" observedRunningTime="2025-11-21 14:54:02.078419522 +0000 UTC m=+4918.804834269" watchObservedRunningTime="2025-11-21 14:54:02.079090179 +0000 UTC m=+4918.805504916" Nov 21 14:54:03 crc kubenswrapper[4675]: I1121 14:54:03.073728 4675 generic.go:334] "Generic (PLEG): container finished" podID="9e4584c4-c607-41be-8c73-b25645fa1764" containerID="9ae956b0f11f4f919c17deec52fed74e32713d9e74841d43753a88af9d02cf20" exitCode=0 Nov 21 14:54:03 crc kubenswrapper[4675]: I1121 14:54:03.073765 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" 
event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerDied","Data":"9ae956b0f11f4f919c17deec52fed74e32713d9e74841d43753a88af9d02cf20"} Nov 21 14:54:05 crc kubenswrapper[4675]: I1121 14:54:05.753436 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 14:54:06 crc kubenswrapper[4675]: I1121 14:54:06.085513 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:06 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:06 crc kubenswrapper[4675]: > Nov 21 14:54:06 crc kubenswrapper[4675]: I1121 14:54:06.328990 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:54:06 crc kubenswrapper[4675]: I1121 14:54:06.329458 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:54:07 crc kubenswrapper[4675]: I1121 14:54:07.149145 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerStarted","Data":"9d9ef87bb3fe1b4eb4e6e2b37848de2d6ced188947eee847d9e9b3b92372cd4b"} Nov 21 14:54:07 crc kubenswrapper[4675]: I1121 14:54:07.383566 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:07 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:07 crc kubenswrapper[4675]: > Nov 21 14:54:10 crc kubenswrapper[4675]: I1121 14:54:10.185666 4675 generic.go:334] "Generic (PLEG): container finished" podID="9e4584c4-c607-41be-8c73-b25645fa1764" containerID="9d9ef87bb3fe1b4eb4e6e2b37848de2d6ced188947eee847d9e9b3b92372cd4b" exitCode=0 Nov 21 14:54:10 crc kubenswrapper[4675]: I1121 14:54:10.185752 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerDied","Data":"9d9ef87bb3fe1b4eb4e6e2b37848de2d6ced188947eee847d9e9b3b92372cd4b"} Nov 21 14:54:16 crc kubenswrapper[4675]: I1121 14:54:16.090054 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:16 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:16 crc kubenswrapper[4675]: > Nov 21 14:54:17 crc kubenswrapper[4675]: I1121 14:54:17.392046 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:17 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:17 crc kubenswrapper[4675]: > Nov 21 14:54:18 crc kubenswrapper[4675]: I1121 14:54:18.312450 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" 
event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerStarted","Data":"c4f72e9e0433ec461974e34ac7f3cbcb107ab883e44ee2f82137fad290dc788a"} Nov 21 14:54:18 crc kubenswrapper[4675]: I1121 14:54:18.338386 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2vs27" podStartSLOduration=6.035583742 podStartE2EDuration="20.338366642s" podCreationTimestamp="2025-11-21 14:53:58 +0000 UTC" firstStartedPulling="2025-11-21 14:54:03.079252591 +0000 UTC m=+4919.805667318" lastFinishedPulling="2025-11-21 14:54:17.382035491 +0000 UTC m=+4934.108450218" observedRunningTime="2025-11-21 14:54:18.332448034 +0000 UTC m=+4935.058862771" watchObservedRunningTime="2025-11-21 14:54:18.338366642 +0000 UTC m=+4935.064781379" Nov 21 14:54:19 crc kubenswrapper[4675]: I1121 14:54:19.881968 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:19 crc kubenswrapper[4675]: I1121 14:54:19.882417 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:20 crc kubenswrapper[4675]: I1121 14:54:20.933168 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2vs27" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:20 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:20 crc kubenswrapper[4675]: > Nov 21 14:54:26 crc kubenswrapper[4675]: I1121 14:54:26.081749 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:26 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:26 crc kubenswrapper[4675]: > Nov 21 14:54:27 crc kubenswrapper[4675]: I1121 14:54:27.701478 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:27 crc kubenswrapper[4675]: > Nov 21 14:54:30 crc kubenswrapper[4675]: I1121 14:54:30.941660 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2vs27" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:30 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:30 crc kubenswrapper[4675]: > Nov 21 14:54:31 crc kubenswrapper[4675]: I1121 14:54:31.574556 4675 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.426941191s: [/var/lib/containers/storage/overlay/c74c231630ece1bdb5dd9f869623574851e6134fecc0740656070d24d0327f90/diff /var/log/pods/openstack_openstackclient_3b6ec2a5-ea89-459f-b66c-4822e68f1498/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Nov 21 14:54:32 crc kubenswrapper[4675]: I1121 14:54:32.562152 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-pk6br" podUID="1d54f5b5-d5db-4104-9f8b-072086f8f9a4" containerName="registry-server" probeResult="failure" output=< Nov 21 
14:54:32 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:32 crc kubenswrapper[4675]: > Nov 21 14:54:32 crc kubenswrapper[4675]: I1121 14:54:32.564658 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-pk6br" podUID="1d54f5b5-d5db-4104-9f8b-072086f8f9a4" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:32 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:32 crc kubenswrapper[4675]: > Nov 21 14:54:36 crc kubenswrapper[4675]: I1121 14:54:36.092194 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:36 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:36 crc kubenswrapper[4675]: > Nov 21 14:54:37 crc kubenswrapper[4675]: I1121 14:54:37.376729 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:37 crc kubenswrapper[4675]: > Nov 21 14:54:41 crc kubenswrapper[4675]: I1121 14:54:41.430478 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2vs27" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:41 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:41 crc kubenswrapper[4675]: > Nov 21 14:54:46 crc kubenswrapper[4675]: I1121 14:54:46.082466 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:46 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:46 crc kubenswrapper[4675]: > Nov 21 14:54:47 crc kubenswrapper[4675]: I1121 14:54:47.378628 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:47 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:47 crc kubenswrapper[4675]: > Nov 21 14:54:49 crc kubenswrapper[4675]: I1121 14:54:49.965794 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:50 crc kubenswrapper[4675]: I1121 14:54:50.023159 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:50 crc kubenswrapper[4675]: I1121 14:54:50.212099 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vs27"] Nov 21 14:54:51 crc kubenswrapper[4675]: I1121 14:54:51.699365 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2vs27" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server" containerID="cri-o://c4f72e9e0433ec461974e34ac7f3cbcb107ab883e44ee2f82137fad290dc788a" 
gracePeriod=2 Nov 21 14:54:52 crc kubenswrapper[4675]: I1121 14:54:52.975047 4675 generic.go:334] "Generic (PLEG): container finished" podID="9e4584c4-c607-41be-8c73-b25645fa1764" containerID="c4f72e9e0433ec461974e34ac7f3cbcb107ab883e44ee2f82137fad290dc788a" exitCode=0 Nov 21 14:54:52 crc kubenswrapper[4675]: I1121 14:54:52.975143 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerDied","Data":"c4f72e9e0433ec461974e34ac7f3cbcb107ab883e44ee2f82137fad290dc788a"} Nov 21 14:54:55 crc kubenswrapper[4675]: I1121 14:54:55.117444 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:54:55 crc kubenswrapper[4675]: I1121 14:54:55.173162 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:54:55 crc kubenswrapper[4675]: I1121 14:54:55.741286 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8bx5"] Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.043704 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8bx5" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server" containerID="cri-o://03f2034863ed64c48b3d3706971adac16e8f10322220f45acb2223b7e63fb484" gracePeriod=2 Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.044262 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vs27" event={"ID":"9e4584c4-c607-41be-8c73-b25645fa1764","Type":"ContainerDied","Data":"abd1ed6b60661e343ce800e3b68b95968e186417b451b7f3458ba0a22c0b2445"} Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.087999 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd1ed6b60661e343ce800e3b68b95968e186417b451b7f3458ba0a22c0b2445" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.092758 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.254825 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8r4n\" (UniqueName: \"kubernetes.io/projected/9e4584c4-c607-41be-8c73-b25645fa1764-kube-api-access-h8r4n\") pod \"9e4584c4-c607-41be-8c73-b25645fa1764\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.255138 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-utilities\") pod \"9e4584c4-c607-41be-8c73-b25645fa1764\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.255260 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-catalog-content\") pod \"9e4584c4-c607-41be-8c73-b25645fa1764\" (UID: \"9e4584c4-c607-41be-8c73-b25645fa1764\") " Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.261883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-utilities" (OuterVolumeSpecName: "utilities") pod "9e4584c4-c607-41be-8c73-b25645fa1764" (UID: "9e4584c4-c607-41be-8c73-b25645fa1764"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.308992 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4584c4-c607-41be-8c73-b25645fa1764-kube-api-access-h8r4n" (OuterVolumeSpecName: "kube-api-access-h8r4n") pod "9e4584c4-c607-41be-8c73-b25645fa1764" (UID: "9e4584c4-c607-41be-8c73-b25645fa1764"). InnerVolumeSpecName "kube-api-access-h8r4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.357890 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.357932 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8r4n\" (UniqueName: \"kubernetes.io/projected/9e4584c4-c607-41be-8c73-b25645fa1764-kube-api-access-h8r4n\") on node \"crc\" DevicePath \"\"" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.392379 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:54:57 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:54:57 crc kubenswrapper[4675]: > Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.560146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e4584c4-c607-41be-8c73-b25645fa1764" (UID: "9e4584c4-c607-41be-8c73-b25645fa1764"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:54:57 crc kubenswrapper[4675]: I1121 14:54:57.562622 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e4584c4-c607-41be-8c73-b25645fa1764-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:54:58 crc kubenswrapper[4675]: I1121 14:54:58.071221 4675 generic.go:334] "Generic (PLEG): container finished" podID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerID="03f2034863ed64c48b3d3706971adac16e8f10322220f45acb2223b7e63fb484" exitCode=0 Nov 21 14:54:58 crc kubenswrapper[4675]: I1121 14:54:58.071570 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vs27" Nov 21 14:54:58 crc kubenswrapper[4675]: I1121 14:54:58.077231 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerDied","Data":"03f2034863ed64c48b3d3706971adac16e8f10322220f45acb2223b7e63fb484"} Nov 21 14:54:58 crc kubenswrapper[4675]: I1121 14:54:58.118865 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vs27"] Nov 21 14:54:58 crc kubenswrapper[4675]: I1121 14:54:58.129220 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vs27"] Nov 21 14:54:58 crc kubenswrapper[4675]: E1121 14:54:58.344617 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e4584c4_c607_41be_8c73_b25645fa1764.slice/crio-abd1ed6b60661e343ce800e3b68b95968e186417b451b7f3458ba0a22c0b2445\": RecentStats: unable to find data in memory cache]" Nov 21 14:54:58 crc kubenswrapper[4675]: I1121 14:54:58.880304 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" path="/var/lib/kubelet/pods/9e4584c4-c607-41be-8c73-b25645fa1764/volumes" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.115425 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8bx5" event={"ID":"1de24bae-0f29-4702-9b6f-e12dab9d73fc","Type":"ContainerDied","Data":"eb85a036788a0031c29da50bc53e83fe8a25b5d51452a523a7b47d6f906742b8"} Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.115743 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb85a036788a0031c29da50bc53e83fe8a25b5d51452a523a7b47d6f906742b8" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.196232 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.205335 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-utilities\") pod \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.205425 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-catalog-content\") pod \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.205478 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v9kx\" (UniqueName: \"kubernetes.io/projected/1de24bae-0f29-4702-9b6f-e12dab9d73fc-kube-api-access-4v9kx\") pod \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\" (UID: \"1de24bae-0f29-4702-9b6f-e12dab9d73fc\") " Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.205977 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-utilities" (OuterVolumeSpecName: "utilities") pod "1de24bae-0f29-4702-9b6f-e12dab9d73fc" (UID: "1de24bae-0f29-4702-9b6f-e12dab9d73fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.213615 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de24bae-0f29-4702-9b6f-e12dab9d73fc-kube-api-access-4v9kx" (OuterVolumeSpecName: "kube-api-access-4v9kx") pod "1de24bae-0f29-4702-9b6f-e12dab9d73fc" (UID: "1de24bae-0f29-4702-9b6f-e12dab9d73fc"). InnerVolumeSpecName "kube-api-access-4v9kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.308498 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.308525 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v9kx\" (UniqueName: \"kubernetes.io/projected/1de24bae-0f29-4702-9b6f-e12dab9d73fc-kube-api-access-4v9kx\") on node \"crc\" DevicePath \"\"" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.556822 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1de24bae-0f29-4702-9b6f-e12dab9d73fc" (UID: "1de24bae-0f29-4702-9b6f-e12dab9d73fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:54:59 crc kubenswrapper[4675]: I1121 14:54:59.616992 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1de24bae-0f29-4702-9b6f-e12dab9d73fc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:55:00 crc kubenswrapper[4675]: I1121 14:55:00.125992 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8bx5" Nov 21 14:55:00 crc kubenswrapper[4675]: I1121 14:55:00.164261 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8bx5"] Nov 21 14:55:00 crc kubenswrapper[4675]: I1121 14:55:00.174872 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8bx5"] Nov 21 14:55:00 crc kubenswrapper[4675]: I1121 14:55:00.860801 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" path="/var/lib/kubelet/pods/1de24bae-0f29-4702-9b6f-e12dab9d73fc/volumes" Nov 21 14:55:07 crc kubenswrapper[4675]: I1121 14:55:07.372907 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:55:07 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:55:07 crc kubenswrapper[4675]: > Nov 21 14:55:17 crc kubenswrapper[4675]: I1121 14:55:17.388057 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:55:17 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:55:17 crc kubenswrapper[4675]: > Nov 21 14:55:27 crc kubenswrapper[4675]: I1121 14:55:27.375338 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:55:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:55:27 crc kubenswrapper[4675]: > Nov 21 14:55:37 crc kubenswrapper[4675]: I1121 14:55:37.516844 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:55:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:55:37 crc kubenswrapper[4675]: > Nov 21 14:55:37 crc kubenswrapper[4675]: I1121 14:55:37.517411 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:55:37 crc kubenswrapper[4675]: I1121 14:55:37.518083 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"1b2e96b8ae737761a4c681d8dff28707d75ab1b50c1712ff7ffd2461b8d307e8"} pod="openshift-marketplace/redhat-operators-bbqzj" containerMessage="Container registry-server failed startup probe, will be restarted" Nov 21 14:55:37 crc kubenswrapper[4675]: I1121 14:55:37.518188 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" containerID="cri-o://1b2e96b8ae737761a4c681d8dff28707d75ab1b50c1712ff7ffd2461b8d307e8" gracePeriod=30 Nov 21 14:56:07 crc kubenswrapper[4675]: I1121 14:56:07.926536 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bbqzj_6d6f8e00-6d9e-4d52-a689-d1edf8037c57/registry-server/0.log" Nov 21 14:56:07 crc 
Nov 21 14:56:07 crc kubenswrapper[4675]: I1121 14:56:07.954230 4675 generic.go:334] "Generic (PLEG): container finished" podID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerID="1b2e96b8ae737761a4c681d8dff28707d75ab1b50c1712ff7ffd2461b8d307e8" exitCode=137
Nov 21 14:56:11 crc kubenswrapper[4675]: I1121 14:56:11.125027 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bbqzj_6d6f8e00-6d9e-4d52-a689-d1edf8037c57/registry-server/0.log"
Nov 21 14:56:11 crc kubenswrapper[4675]: I1121 14:56:11.127036 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerStarted","Data":"a5377b17c3853230f9fb09d0c22a1649bf1ac16f9245908502aaa191d61cbe31"}
Nov 21 14:56:16 crc kubenswrapper[4675]: I1121 14:56:16.156882 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:56:16 crc kubenswrapper[4675]: I1121 14:56:16.160514 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:56:16 crc kubenswrapper[4675]: I1121 14:56:16.327505 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbqzj"
Nov 21 14:56:16 crc kubenswrapper[4675]: I1121 14:56:16.327564 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbqzj"
Nov 21 14:56:17 crc kubenswrapper[4675]: I1121 14:56:17.385649 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:56:17 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:56:17 crc kubenswrapper[4675]: >
Nov 21 14:56:27 crc kubenswrapper[4675]: I1121 14:56:27.995496 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:56:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:56:27 crc kubenswrapper[4675]: >
Nov 21 14:56:37 crc kubenswrapper[4675]: I1121 14:56:37.405172 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:56:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:56:37 crc kubenswrapper[4675]: >
Nov 21 14:56:46 crc kubenswrapper[4675]: I1121 14:56:46.136608 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:56:46 crc kubenswrapper[4675]: I1121 14:56:46.137435 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:56:47 crc kubenswrapper[4675]: I1121 14:56:47.385374 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:56:47 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:56:47 crc kubenswrapper[4675]: >
Nov 21 14:56:57 crc kubenswrapper[4675]: I1121 14:56:57.927031 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:56:57 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:56:57 crc kubenswrapper[4675]: >
Nov 21 14:57:07 crc kubenswrapper[4675]: I1121 14:57:07.390710 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:57:07 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:57:07 crc kubenswrapper[4675]: >
Nov 21 14:57:11 crc kubenswrapper[4675]: I1121 14:57:11.801642 4675 trace.go:236] Trace[703246945]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-pwjwv" (21-Nov-2025 14:57:09.791) (total time: 2005ms):
Nov 21 14:57:11 crc kubenswrapper[4675]: Trace[703246945]: [2.005035445s] [2.005035445s] END
Nov 21 14:57:16 crc kubenswrapper[4675]: I1121 14:57:16.136755 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:57:16 crc kubenswrapper[4675]: I1121 14:57:16.137328 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:57:16 crc kubenswrapper[4675]: I1121 14:57:16.138210 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx"
Nov 21 14:57:16 crc kubenswrapper[4675]: I1121 14:57:16.144262 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:57:16 crc kubenswrapper[4675]: I1121 14:57:16.144355 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" gracePeriod=600 Nov 21 14:57:17 crc kubenswrapper[4675]: E1121 14:57:17.084937 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:57:17 crc kubenswrapper[4675]: I1121 14:57:17.387910 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:57:17 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:57:17 crc kubenswrapper[4675]: > Nov 21 14:57:17 crc kubenswrapper[4675]: I1121 14:57:17.398363 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a"} Nov 21 14:57:17 crc kubenswrapper[4675]: I1121 14:57:17.398616 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" exitCode=0 Nov 21 14:57:17 crc kubenswrapper[4675]: I1121 14:57:17.411759 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:57:17 crc kubenswrapper[4675]: I1121 14:57:17.456123 4675 scope.go:117] "RemoveContainer" containerID="2634b22282a4bd918df36e3c8783b1ea95bfdfcff63b623fb73e04f873be6fb3" Nov 21 14:57:17 crc kubenswrapper[4675]: E1121 14:57:17.457019 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:57:27 crc kubenswrapper[4675]: I1121 14:57:27.378583 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:57:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:57:27 crc kubenswrapper[4675]: > Nov 21 14:57:29 crc kubenswrapper[4675]: I1121 14:57:29.849435 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:57:29 crc kubenswrapper[4675]: E1121 
Nov 21 14:57:37 crc kubenswrapper[4675]: I1121 14:57:37.399520 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:57:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:57:37 crc kubenswrapper[4675]: >
Nov 21 14:57:44 crc kubenswrapper[4675]: I1121 14:57:44.859795 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a"
Nov 21 14:57:44 crc kubenswrapper[4675]: E1121 14:57:44.860713 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 14:57:47 crc kubenswrapper[4675]: I1121 14:57:47.398645 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:57:47 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:57:47 crc kubenswrapper[4675]: >
Nov 21 14:57:47 crc kubenswrapper[4675]: I1121 14:57:47.400547 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbqzj"
Nov 21 14:57:47 crc kubenswrapper[4675]: I1121 14:57:47.401598 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"a5377b17c3853230f9fb09d0c22a1649bf1ac16f9245908502aaa191d61cbe31"} pod="openshift-marketplace/redhat-operators-bbqzj" containerMessage="Container registry-server failed startup probe, will be restarted"
Nov 21 14:57:47 crc kubenswrapper[4675]: I1121 14:57:47.401688 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" containerID="cri-o://a5377b17c3853230f9fb09d0c22a1649bf1ac16f9245908502aaa191d61cbe31" gracePeriod=30
Nov 21 14:57:55 crc kubenswrapper[4675]: I1121 14:57:55.849296 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a"
Nov 21 14:57:55 crc kubenswrapper[4675]: E1121 14:57:55.850101 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:58:03 crc kubenswrapper[4675]: I1121 14:58:03.947936 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bbqzj_6d6f8e00-6d9e-4d52-a689-d1edf8037c57/registry-server/0.log" Nov 21 14:58:03 crc kubenswrapper[4675]: I1121 14:58:03.949244 4675 generic.go:334] "Generic (PLEG): container finished" podID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerID="a5377b17c3853230f9fb09d0c22a1649bf1ac16f9245908502aaa191d61cbe31" exitCode=0 Nov 21 14:58:03 crc kubenswrapper[4675]: I1121 14:58:03.949279 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerDied","Data":"a5377b17c3853230f9fb09d0c22a1649bf1ac16f9245908502aaa191d61cbe31"} Nov 21 14:58:03 crc kubenswrapper[4675]: I1121 14:58:03.949318 4675 scope.go:117] "RemoveContainer" containerID="1b2e96b8ae737761a4c681d8dff28707d75ab1b50c1712ff7ffd2461b8d307e8" Nov 21 14:58:04 crc kubenswrapper[4675]: I1121 14:58:04.965919 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerStarted","Data":"deb150ba12185cfae3b74633526aec840db0756d19c6c1b2c54cfe458c30b4b2"} Nov 21 14:58:06 crc kubenswrapper[4675]: I1121 14:58:06.326712 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:58:06 crc kubenswrapper[4675]: I1121 14:58:06.327219 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:58:07 crc kubenswrapper[4675]: I1121 14:58:07.403844 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:58:07 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:58:07 crc kubenswrapper[4675]: > Nov 21 14:58:09 crc kubenswrapper[4675]: I1121 14:58:09.850061 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:58:09 crc kubenswrapper[4675]: E1121 14:58:09.851007 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:58:17 crc kubenswrapper[4675]: I1121 14:58:17.524445 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:58:17 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:58:17 crc kubenswrapper[4675]: > Nov 21 14:58:22 crc kubenswrapper[4675]: I1121 14:58:22.849775 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:58:22 crc kubenswrapper[4675]: E1121 14:58:22.850960 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:58:27 crc kubenswrapper[4675]: I1121 14:58:27.380037 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" probeResult="failure" output=< Nov 21 14:58:27 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 14:58:27 crc kubenswrapper[4675]: > Nov 21 14:58:36 crc kubenswrapper[4675]: I1121 14:58:36.406533 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:58:36 crc kubenswrapper[4675]: I1121 14:58:36.464939 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:58:37 crc kubenswrapper[4675]: I1121 14:58:37.630789 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbqzj"] Nov 21 14:58:37 crc kubenswrapper[4675]: I1121 14:58:37.849482 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:58:37 crc kubenswrapper[4675]: E1121 14:58:37.850218 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:58:38 crc kubenswrapper[4675]: I1121 14:58:38.343710 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbqzj" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server" containerID="cri-o://deb150ba12185cfae3b74633526aec840db0756d19c6c1b2c54cfe458c30b4b2" gracePeriod=2 Nov 21 14:58:39 crc kubenswrapper[4675]: I1121 14:58:39.359621 4675 generic.go:334] "Generic (PLEG): container finished" podID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerID="deb150ba12185cfae3b74633526aec840db0756d19c6c1b2c54cfe458c30b4b2" exitCode=0 Nov 21 14:58:39 crc kubenswrapper[4675]: I1121 14:58:39.359708 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerDied","Data":"deb150ba12185cfae3b74633526aec840db0756d19c6c1b2c54cfe458c30b4b2"} Nov 21 14:58:39 crc kubenswrapper[4675]: I1121 14:58:39.359932 4675 scope.go:117] "RemoveContainer" containerID="a5377b17c3853230f9fb09d0c22a1649bf1ac16f9245908502aaa191d61cbe31" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.110714 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.207903 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssmd6\" (UniqueName: \"kubernetes.io/projected/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-kube-api-access-ssmd6\") pod \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.208194 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-catalog-content\") pod \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.208254 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-utilities\") pod \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\" (UID: \"6d6f8e00-6d9e-4d52-a689-d1edf8037c57\") " Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.210870 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-utilities" (OuterVolumeSpecName: "utilities") pod "6d6f8e00-6d9e-4d52-a689-d1edf8037c57" (UID: "6d6f8e00-6d9e-4d52-a689-d1edf8037c57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.225423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-kube-api-access-ssmd6" (OuterVolumeSpecName: "kube-api-access-ssmd6") pod "6d6f8e00-6d9e-4d52-a689-d1edf8037c57" (UID: "6d6f8e00-6d9e-4d52-a689-d1edf8037c57"). InnerVolumeSpecName "kube-api-access-ssmd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.311403 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.311441 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssmd6\" (UniqueName: \"kubernetes.io/projected/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-kube-api-access-ssmd6\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.334370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d6f8e00-6d9e-4d52-a689-d1edf8037c57" (UID: "6d6f8e00-6d9e-4d52-a689-d1edf8037c57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.373721 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbqzj" event={"ID":"6d6f8e00-6d9e-4d52-a689-d1edf8037c57","Type":"ContainerDied","Data":"0578f20793d19e42716f6741a432bdead1a7c9a59a6bd161e054092efbfd9044"} Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.373781 4675 scope.go:117] "RemoveContainer" containerID="deb150ba12185cfae3b74633526aec840db0756d19c6c1b2c54cfe458c30b4b2" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.374010 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbqzj" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.411517 4675 scope.go:117] "RemoveContainer" containerID="092ba328510cd394302f9a70840d6798b62da00c1bf855783e896a683e0cda5e" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.413499 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d6f8e00-6d9e-4d52-a689-d1edf8037c57-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.422929 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbqzj"] Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.435168 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bbqzj"] Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.518257 4675 scope.go:117] "RemoveContainer" containerID="8816469ec28a12357a53e7b9c54120db05b477fa5789c932df48e144ccbbb5e5" Nov 21 14:58:40 crc kubenswrapper[4675]: I1121 14:58:40.865367 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" path="/var/lib/kubelet/pods/6d6f8e00-6d9e-4d52-a689-d1edf8037c57/volumes" Nov 21 14:58:52 crc kubenswrapper[4675]: I1121 14:58:52.850327 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:58:52 crc kubenswrapper[4675]: E1121 14:58:52.851234 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:59:07 crc kubenswrapper[4675]: I1121 14:59:07.849203 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:59:07 crc kubenswrapper[4675]: E1121 14:59:07.850014 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:59:20 crc kubenswrapper[4675]: I1121 14:59:20.850112 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:59:20 crc kubenswrapper[4675]: E1121 14:59:20.852247 
4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:59:32 crc kubenswrapper[4675]: I1121 14:59:32.175240 4675 scope.go:117] "RemoveContainer" containerID="aa57635ba80db781862da30ed976eb31762969d4580934aba3901c87e0084814" Nov 21 14:59:32 crc kubenswrapper[4675]: I1121 14:59:32.240557 4675 scope.go:117] "RemoveContainer" containerID="36d7943abfa81f2fa01dfe9a33bda3c5e54cc5a98b3946fbe0333159b638f67f" Nov 21 14:59:32 crc kubenswrapper[4675]: I1121 14:59:32.848855 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:59:32 crc kubenswrapper[4675]: E1121 14:59:32.849403 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:59:44 crc kubenswrapper[4675]: I1121 14:59:44.858404 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:59:44 crc kubenswrapper[4675]: E1121 14:59:44.859456 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.062245 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9xr4t"] Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.067978 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="extract-utilities" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.068353 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="extract-utilities" Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.068460 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="extract-utilities" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.068537 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="extract-utilities" Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.068630 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="extract-utilities" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.068697 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="extract-utilities" Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.068837 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="extract-content"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.068926 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="extract-content"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.068996 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="extract-content"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.069060 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.069154 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.069228 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.069286 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.069353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.069408 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.069471 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="extract-content"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.069529 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="extract-content"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.069590 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.069640 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: E1121 14:59:57.069704 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.069775 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.070272 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.070406 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.070545 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de24bae-0f29-4702-9b6f-e12dab9d73fc" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.070665 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4584c4-c607-41be-8c73-b25645fa1764" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.071250 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6f8e00-6d9e-4d52-a689-d1edf8037c57" containerName="registry-server"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.076360 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.147678 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xr4t"]
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.177334 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bds\" (UniqueName: \"kubernetes.io/projected/ea076f02-208f-417b-9296-825d712fcbc6-kube-api-access-66bds\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.177420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-utilities\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.177486 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.279625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.279881 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66bds\" (UniqueName: \"kubernetes.io/projected/ea076f02-208f-417b-9296-825d712fcbc6-kube-api-access-66bds\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.279988 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-utilities\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.280199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t"
(UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.280354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-utilities\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.303042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66bds\" (UniqueName: \"kubernetes.io/projected/ea076f02-208f-417b-9296-825d712fcbc6-kube-api-access-66bds\") pod \"certified-operators-9xr4t\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.401549 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 14:59:57 crc kubenswrapper[4675]: I1121 14:59:57.978521 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xr4t"] Nov 21 14:59:57 crc kubenswrapper[4675]: W1121 14:59:57.979989 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea076f02_208f_417b_9296_825d712fcbc6.slice/crio-ccf2b9360603025911d284f71635cec9afa8b6850a3ee58803612c3d3acb6511 WatchSource:0}: Error finding container ccf2b9360603025911d284f71635cec9afa8b6850a3ee58803612c3d3acb6511: Status 404 returned error can't find the container with id ccf2b9360603025911d284f71635cec9afa8b6850a3ee58803612c3d3acb6511 Nov 21 14:59:58 crc kubenswrapper[4675]: I1121 14:59:58.489865 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea076f02-208f-417b-9296-825d712fcbc6" containerID="9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738" exitCode=0 Nov 21 14:59:58 crc kubenswrapper[4675]: I1121 14:59:58.490058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerDied","Data":"9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738"} Nov 21 14:59:58 crc kubenswrapper[4675]: I1121 14:59:58.490186 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerStarted","Data":"ccf2b9360603025911d284f71635cec9afa8b6850a3ee58803612c3d3acb6511"} Nov 21 14:59:58 crc kubenswrapper[4675]: I1121 14:59:58.494188 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:59:59 crc kubenswrapper[4675]: I1121 14:59:59.512255 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerStarted","Data":"fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48"} Nov 21 14:59:59 crc kubenswrapper[4675]: I1121 14:59:59.849152 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 14:59:59 crc kubenswrapper[4675]: E1121 14:59:59.849491 4675 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.193384 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw"] Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.194916 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.201639 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.202264 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.205697 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw"] Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.268027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b33b49-0c69-4a37-be45-4ccf3dedde32-config-volume\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.268366 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b33b49-0c69-4a37-be45-4ccf3dedde32-secret-volume\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.268410 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68g9h\" (UniqueName: \"kubernetes.io/projected/83b33b49-0c69-4a37-be45-4ccf3dedde32-kube-api-access-68g9h\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.370925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b33b49-0c69-4a37-be45-4ccf3dedde32-secret-volume\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.371005 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68g9h\" (UniqueName: \"kubernetes.io/projected/83b33b49-0c69-4a37-be45-4ccf3dedde32-kube-api-access-68g9h\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.371172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b33b49-0c69-4a37-be45-4ccf3dedde32-config-volume\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.372393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b33b49-0c69-4a37-be45-4ccf3dedde32-config-volume\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.377762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b33b49-0c69-4a37-be45-4ccf3dedde32-secret-volume\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.391462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68g9h\" (UniqueName: \"kubernetes.io/projected/83b33b49-0c69-4a37-be45-4ccf3dedde32-kube-api-access-68g9h\") pod \"collect-profiles-29395620-l5gpw\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.522335 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:00 crc kubenswrapper[4675]: I1121 15:00:00.991934 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw"] Nov 21 15:00:01 crc kubenswrapper[4675]: W1121 15:00:01.003873 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83b33b49_0c69_4a37_be45_4ccf3dedde32.slice/crio-fda9c578b5625346cfe2ec261a4942f27db8ee028d1e848a22df19a9ba07e4e9 WatchSource:0}: Error finding container fda9c578b5625346cfe2ec261a4942f27db8ee028d1e848a22df19a9ba07e4e9: Status 404 returned error can't find the container with id fda9c578b5625346cfe2ec261a4942f27db8ee028d1e848a22df19a9ba07e4e9 Nov 21 15:00:01 crc kubenswrapper[4675]: I1121 15:00:01.535604 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" event={"ID":"83b33b49-0c69-4a37-be45-4ccf3dedde32","Type":"ContainerStarted","Data":"0922f9ef9367c765a73cf1ff56a3c0be5ad9d2c07f0f6d7c6b67eb29407c4b7a"} Nov 21 15:00:01 crc kubenswrapper[4675]: I1121 15:00:01.536154 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" event={"ID":"83b33b49-0c69-4a37-be45-4ccf3dedde32","Type":"ContainerStarted","Data":"fda9c578b5625346cfe2ec261a4942f27db8ee028d1e848a22df19a9ba07e4e9"} Nov 21 15:00:01 crc kubenswrapper[4675]: I1121 15:00:01.539566 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea076f02-208f-417b-9296-825d712fcbc6" containerID="fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48" exitCode=0 Nov 21 15:00:01 crc kubenswrapper[4675]: I1121 15:00:01.539610 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerDied","Data":"fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48"} Nov 21 15:00:01 crc kubenswrapper[4675]: I1121 15:00:01.557333 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" podStartSLOduration=1.5573029059999999 podStartE2EDuration="1.557302906s" podCreationTimestamp="2025-11-21 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:01.549113751 +0000 UTC m=+5278.275528478" watchObservedRunningTime="2025-11-21 15:00:01.557302906 +0000 UTC m=+5278.283717633" Nov 21 15:00:02 crc kubenswrapper[4675]: I1121 15:00:02.551333 4675 generic.go:334] "Generic (PLEG): container finished" podID="83b33b49-0c69-4a37-be45-4ccf3dedde32" containerID="0922f9ef9367c765a73cf1ff56a3c0be5ad9d2c07f0f6d7c6b67eb29407c4b7a" exitCode=0 Nov 21 15:00:02 crc kubenswrapper[4675]: I1121 15:00:02.551446 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" event={"ID":"83b33b49-0c69-4a37-be45-4ccf3dedde32","Type":"ContainerDied","Data":"0922f9ef9367c765a73cf1ff56a3c0be5ad9d2c07f0f6d7c6b67eb29407c4b7a"} Nov 21 15:00:02 crc kubenswrapper[4675]: I1121 15:00:02.555724 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" 
event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerStarted","Data":"a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c"} Nov 21 15:00:02 crc kubenswrapper[4675]: I1121 15:00:02.587478 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9xr4t" podStartSLOduration=2.065808302 podStartE2EDuration="5.587461626s" podCreationTimestamp="2025-11-21 14:59:57 +0000 UTC" firstStartedPulling="2025-11-21 14:59:58.493047442 +0000 UTC m=+5275.219462169" lastFinishedPulling="2025-11-21 15:00:02.014700776 +0000 UTC m=+5278.741115493" observedRunningTime="2025-11-21 15:00:02.584633645 +0000 UTC m=+5279.311048402" watchObservedRunningTime="2025-11-21 15:00:02.587461626 +0000 UTC m=+5279.313876353" Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.580156 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" event={"ID":"83b33b49-0c69-4a37-be45-4ccf3dedde32","Type":"ContainerDied","Data":"fda9c578b5625346cfe2ec261a4942f27db8ee028d1e848a22df19a9ba07e4e9"} Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.580664 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda9c578b5625346cfe2ec261a4942f27db8ee028d1e848a22df19a9ba07e4e9" Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.725058 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.896745 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b33b49-0c69-4a37-be45-4ccf3dedde32-secret-volume\") pod \"83b33b49-0c69-4a37-be45-4ccf3dedde32\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.897058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68g9h\" (UniqueName: \"kubernetes.io/projected/83b33b49-0c69-4a37-be45-4ccf3dedde32-kube-api-access-68g9h\") pod \"83b33b49-0c69-4a37-be45-4ccf3dedde32\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.897257 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b33b49-0c69-4a37-be45-4ccf3dedde32-config-volume\") pod \"83b33b49-0c69-4a37-be45-4ccf3dedde32\" (UID: \"83b33b49-0c69-4a37-be45-4ccf3dedde32\") " Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.897665 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b33b49-0c69-4a37-be45-4ccf3dedde32-config-volume" (OuterVolumeSpecName: "config-volume") pod "83b33b49-0c69-4a37-be45-4ccf3dedde32" (UID: "83b33b49-0c69-4a37-be45-4ccf3dedde32"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.898440 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b33b49-0c69-4a37-be45-4ccf3dedde32-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.903853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b33b49-0c69-4a37-be45-4ccf3dedde32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83b33b49-0c69-4a37-be45-4ccf3dedde32" (UID: "83b33b49-0c69-4a37-be45-4ccf3dedde32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:04 crc kubenswrapper[4675]: I1121 15:00:04.904459 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b33b49-0c69-4a37-be45-4ccf3dedde32-kube-api-access-68g9h" (OuterVolumeSpecName: "kube-api-access-68g9h") pod "83b33b49-0c69-4a37-be45-4ccf3dedde32" (UID: "83b33b49-0c69-4a37-be45-4ccf3dedde32"). InnerVolumeSpecName "kube-api-access-68g9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:05 crc kubenswrapper[4675]: I1121 15:00:05.001557 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b33b49-0c69-4a37-be45-4ccf3dedde32-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:05 crc kubenswrapper[4675]: I1121 15:00:05.001606 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68g9h\" (UniqueName: \"kubernetes.io/projected/83b33b49-0c69-4a37-be45-4ccf3dedde32-kube-api-access-68g9h\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:05 crc kubenswrapper[4675]: I1121 15:00:05.593436 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-l5gpw" Nov 21 15:00:05 crc kubenswrapper[4675]: I1121 15:00:05.833526 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"] Nov 21 15:00:05 crc kubenswrapper[4675]: I1121 15:00:05.843570 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-wbnw4"] Nov 21 15:00:06 crc kubenswrapper[4675]: I1121 15:00:06.865517 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc38a92-f126-4d9b-9500-3f98029d5cfe" path="/var/lib/kubelet/pods/0bc38a92-f126-4d9b-9500-3f98029d5cfe/volumes" Nov 21 15:00:07 crc kubenswrapper[4675]: I1121 15:00:07.402060 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 15:00:07 crc kubenswrapper[4675]: I1121 15:00:07.403592 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 15:00:07 crc kubenswrapper[4675]: I1121 15:00:07.454474 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 15:00:07 crc kubenswrapper[4675]: I1121 15:00:07.674570 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 15:00:07 crc kubenswrapper[4675]: I1121 15:00:07.733165 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xr4t"] Nov 21 15:00:09 crc kubenswrapper[4675]: I1121 15:00:09.645357 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9xr4t" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="registry-server" containerID="cri-o://a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c" gracePeriod=2 Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.324444 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.446060 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content\") pod \"ea076f02-208f-417b-9296-825d712fcbc6\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.446310 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-utilities\") pod \"ea076f02-208f-417b-9296-825d712fcbc6\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.446426 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66bds\" (UniqueName: \"kubernetes.io/projected/ea076f02-208f-417b-9296-825d712fcbc6-kube-api-access-66bds\") pod \"ea076f02-208f-417b-9296-825d712fcbc6\" (UID: \"ea076f02-208f-417b-9296-825d712fcbc6\") " Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.447102 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-utilities" (OuterVolumeSpecName: "utilities") pod "ea076f02-208f-417b-9296-825d712fcbc6" (UID: "ea076f02-208f-417b-9296-825d712fcbc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.447648 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.452712 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea076f02-208f-417b-9296-825d712fcbc6-kube-api-access-66bds" (OuterVolumeSpecName: "kube-api-access-66bds") pod "ea076f02-208f-417b-9296-825d712fcbc6" (UID: "ea076f02-208f-417b-9296-825d712fcbc6"). InnerVolumeSpecName "kube-api-access-66bds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.521882 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea076f02-208f-417b-9296-825d712fcbc6" (UID: "ea076f02-208f-417b-9296-825d712fcbc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.549840 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea076f02-208f-417b-9296-825d712fcbc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.549871 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66bds\" (UniqueName: \"kubernetes.io/projected/ea076f02-208f-417b-9296-825d712fcbc6-kube-api-access-66bds\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.658625 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea076f02-208f-417b-9296-825d712fcbc6" containerID="a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c" exitCode=0 Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.658667 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerDied","Data":"a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c"} Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.658694 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xr4t" event={"ID":"ea076f02-208f-417b-9296-825d712fcbc6","Type":"ContainerDied","Data":"ccf2b9360603025911d284f71635cec9afa8b6850a3ee58803612c3d3acb6511"} Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.658710 4675 scope.go:117] "RemoveContainer" containerID="a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.658857 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xr4t" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.714226 4675 scope.go:117] "RemoveContainer" containerID="fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.716790 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xr4t"] Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.730366 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9xr4t"] Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.735507 4675 scope.go:117] "RemoveContainer" containerID="9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.805658 4675 scope.go:117] "RemoveContainer" containerID="a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c" Nov 21 15:00:10 crc kubenswrapper[4675]: E1121 15:00:10.806715 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c\": container with ID starting with a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c not found: ID does not exist" containerID="a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.806763 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c"} err="failed to get container status \"a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c\": rpc error: code = NotFound desc = could not find container \"a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c\": container with ID starting with a8c5af9d88cbcb77a44671315ec6c74b051f173efd896193a99048f10e179c8c not found: ID does not exist" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.806785 4675 scope.go:117] "RemoveContainer" containerID="fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48" Nov 21 15:00:10 crc kubenswrapper[4675]: E1121 15:00:10.809229 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48\": container with ID starting with fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48 not found: ID does not exist" containerID="fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.809285 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48"} err="failed to get container status \"fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48\": rpc error: code = NotFound desc = could not find container \"fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48\": container with ID starting with fa56fbcb2852c65cf25cf4a40c20fc9103c19b61d086b37ea5e2833dfe004a48 not found: ID does not exist" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.809313 4675 scope.go:117] "RemoveContainer" containerID="9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738" Nov 21 15:00:10 crc kubenswrapper[4675]: E1121 15:00:10.809712 4675 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738\": container with ID starting with 9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738 not found: ID does not exist" containerID="9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.809768 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738"} err="failed to get container status \"9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738\": rpc error: code = NotFound desc = could not find container \"9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738\": container with ID starting with 9a0a4a27092db4ee5a7b774b49371eb5d9ebde194a20b31fe78f928cd981a738 not found: ID does not exist" Nov 21 15:00:10 crc kubenswrapper[4675]: E1121 15:00:10.851559 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea076f02_208f_417b_9296_825d712fcbc6.slice/crio-ccf2b9360603025911d284f71635cec9afa8b6850a3ee58803612c3d3acb6511\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea076f02_208f_417b_9296_825d712fcbc6.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:00:10 crc kubenswrapper[4675]: I1121 15:00:10.864413 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea076f02-208f-417b-9296-825d712fcbc6" path="/var/lib/kubelet/pods/ea076f02-208f-417b-9296-825d712fcbc6/volumes" Nov 21 15:00:11 crc kubenswrapper[4675]: I1121 15:00:11.849151 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:00:11 crc kubenswrapper[4675]: E1121 15:00:11.849945 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:00:22 crc kubenswrapper[4675]: I1121 15:00:22.849598 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:00:22 crc kubenswrapper[4675]: E1121 15:00:22.850848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:00:32 crc kubenswrapper[4675]: I1121 15:00:32.322389 4675 scope.go:117] "RemoveContainer" containerID="03f2034863ed64c48b3d3706971adac16e8f10322220f45acb2223b7e63fb484" Nov 21 15:00:32 crc kubenswrapper[4675]: I1121 15:00:32.348500 4675 scope.go:117] "RemoveContainer" containerID="00b1052d53e4038dc3e546a923555b382c0cb5c3b7f2b0f5ecef05c5908b06fd" Nov 21 15:00:32 crc kubenswrapper[4675]: I1121 15:00:32.372301 4675 scope.go:117] 
"RemoveContainer" containerID="c4f72e9e0433ec461974e34ac7f3cbcb107ab883e44ee2f82137fad290dc788a" Nov 21 15:00:32 crc kubenswrapper[4675]: I1121 15:00:32.465945 4675 scope.go:117] "RemoveContainer" containerID="9d9ef87bb3fe1b4eb4e6e2b37848de2d6ced188947eee847d9e9b3b92372cd4b" Nov 21 15:00:32 crc kubenswrapper[4675]: I1121 15:00:32.573747 4675 scope.go:117] "RemoveContainer" containerID="9ae956b0f11f4f919c17deec52fed74e32713d9e74841d43753a88af9d02cf20" Nov 21 15:00:34 crc kubenswrapper[4675]: I1121 15:00:34.856651 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:00:34 crc kubenswrapper[4675]: E1121 15:00:34.858608 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:00:47 crc kubenswrapper[4675]: I1121 15:00:47.849788 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:00:47 crc kubenswrapper[4675]: E1121 15:00:47.850716 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.154638 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395621-7g6lb"] Nov 21 15:01:00 crc kubenswrapper[4675]: E1121 15:01:00.156129 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b33b49-0c69-4a37-be45-4ccf3dedde32" containerName="collect-profiles" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.156169 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b33b49-0c69-4a37-be45-4ccf3dedde32" containerName="collect-profiles" Nov 21 15:01:00 crc kubenswrapper[4675]: E1121 15:01:00.156193 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="registry-server" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.156202 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="registry-server" Nov 21 15:01:00 crc kubenswrapper[4675]: E1121 15:01:00.156221 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="extract-content" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.156227 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="extract-content" Nov 21 15:01:00 crc kubenswrapper[4675]: E1121 15:01:00.156248 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="extract-utilities" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.156257 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="extract-utilities" 
Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.156523 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea076f02-208f-417b-9296-825d712fcbc6" containerName="registry-server" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.156559 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b33b49-0c69-4a37-be45-4ccf3dedde32" containerName="collect-profiles" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.158776 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.169920 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395621-7g6lb"] Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.206855 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4st\" (UniqueName: \"kubernetes.io/projected/06506868-9f56-4ca3-870a-bf6062173504-kube-api-access-jn4st\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.206917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-combined-ca-bundle\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.206958 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-config-data\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.207545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-fernet-keys\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.310170 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn4st\" (UniqueName: \"kubernetes.io/projected/06506868-9f56-4ca3-870a-bf6062173504-kube-api-access-jn4st\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.310237 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-combined-ca-bundle\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.310273 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-config-data\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " 
pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.310359 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-fernet-keys\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.317164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-fernet-keys\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.317601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-combined-ca-bundle\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.318447 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-config-data\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.325472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn4st\" (UniqueName: \"kubernetes.io/projected/06506868-9f56-4ca3-870a-bf6062173504-kube-api-access-jn4st\") pod \"keystone-cron-29395621-7g6lb\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.495041 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.850193 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:01:00 crc kubenswrapper[4675]: E1121 15:01:00.850893 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:01:00 crc kubenswrapper[4675]: I1121 15:01:00.992423 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395621-7g6lb"] Nov 21 15:01:01 crc kubenswrapper[4675]: W1121 15:01:01.000038 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06506868_9f56_4ca3_870a_bf6062173504.slice/crio-6bf17f42841a2bf1f3d57a941feebc4ed09c3dcd15864674c7cd213054a73b21 WatchSource:0}: Error finding container 6bf17f42841a2bf1f3d57a941feebc4ed09c3dcd15864674c7cd213054a73b21: Status 404 returned error can't find the container with id 6bf17f42841a2bf1f3d57a941feebc4ed09c3dcd15864674c7cd213054a73b21 Nov 21 15:01:01 crc kubenswrapper[4675]: I1121 15:01:01.245542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-7g6lb" event={"ID":"06506868-9f56-4ca3-870a-bf6062173504","Type":"ContainerStarted","Data":"04f75a6755e4d948bababf7dc5a30c3b0679b6b99efee99508de312d530881c9"} Nov 21 15:01:01 crc kubenswrapper[4675]: I1121 15:01:01.245838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-7g6lb" event={"ID":"06506868-9f56-4ca3-870a-bf6062173504","Type":"ContainerStarted","Data":"6bf17f42841a2bf1f3d57a941feebc4ed09c3dcd15864674c7cd213054a73b21"} Nov 21 15:01:01 crc kubenswrapper[4675]: I1121 15:01:01.279144 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395621-7g6lb" podStartSLOduration=1.279121711 podStartE2EDuration="1.279121711s" podCreationTimestamp="2025-11-21 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:01.266403623 +0000 UTC m=+5337.992818400" watchObservedRunningTime="2025-11-21 15:01:01.279121711 +0000 UTC m=+5338.005536448" Nov 21 15:01:06 crc kubenswrapper[4675]: I1121 15:01:06.318726 4675 generic.go:334] "Generic (PLEG): container finished" podID="06506868-9f56-4ca3-870a-bf6062173504" containerID="04f75a6755e4d948bababf7dc5a30c3b0679b6b99efee99508de312d530881c9" exitCode=0 Nov 21 15:01:06 crc kubenswrapper[4675]: I1121 15:01:06.318845 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-7g6lb" event={"ID":"06506868-9f56-4ca3-870a-bf6062173504","Type":"ContainerDied","Data":"04f75a6755e4d948bababf7dc5a30c3b0679b6b99efee99508de312d530881c9"} Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.761441 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.896389 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-combined-ca-bundle\") pod \"06506868-9f56-4ca3-870a-bf6062173504\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.896543 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn4st\" (UniqueName: \"kubernetes.io/projected/06506868-9f56-4ca3-870a-bf6062173504-kube-api-access-jn4st\") pod \"06506868-9f56-4ca3-870a-bf6062173504\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.896705 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-config-data\") pod \"06506868-9f56-4ca3-870a-bf6062173504\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.896763 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-fernet-keys\") pod \"06506868-9f56-4ca3-870a-bf6062173504\" (UID: \"06506868-9f56-4ca3-870a-bf6062173504\") " Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.902371 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06506868-9f56-4ca3-870a-bf6062173504-kube-api-access-jn4st" (OuterVolumeSpecName: "kube-api-access-jn4st") pod "06506868-9f56-4ca3-870a-bf6062173504" (UID: "06506868-9f56-4ca3-870a-bf6062173504"). InnerVolumeSpecName "kube-api-access-jn4st". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.902443 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "06506868-9f56-4ca3-870a-bf6062173504" (UID: "06506868-9f56-4ca3-870a-bf6062173504"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.946250 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06506868-9f56-4ca3-870a-bf6062173504" (UID: "06506868-9f56-4ca3-870a-bf6062173504"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:07 crc kubenswrapper[4675]: I1121 15:01:07.972048 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-config-data" (OuterVolumeSpecName: "config-data") pod "06506868-9f56-4ca3-870a-bf6062173504" (UID: "06506868-9f56-4ca3-870a-bf6062173504"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.000546 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.000886 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.000901 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06506868-9f56-4ca3-870a-bf6062173504-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.000918 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn4st\" (UniqueName: \"kubernetes.io/projected/06506868-9f56-4ca3-870a-bf6062173504-kube-api-access-jn4st\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.343216 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-7g6lb" event={"ID":"06506868-9f56-4ca3-870a-bf6062173504","Type":"ContainerDied","Data":"6bf17f42841a2bf1f3d57a941feebc4ed09c3dcd15864674c7cd213054a73b21"} Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.343279 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf17f42841a2bf1f3d57a941feebc4ed09c3dcd15864674c7cd213054a73b21" Nov 21 15:01:08 crc kubenswrapper[4675]: I1121 15:01:08.343286 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395621-7g6lb" Nov 21 15:01:11 crc kubenswrapper[4675]: I1121 15:01:11.849357 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:01:11 crc kubenswrapper[4675]: E1121 15:01:11.849864 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:01:26 crc kubenswrapper[4675]: I1121 15:01:26.849175 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:01:26 crc kubenswrapper[4675]: E1121 15:01:26.850261 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:01:38 crc kubenswrapper[4675]: I1121 15:01:38.848765 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:01:38 crc kubenswrapper[4675]: E1121 15:01:38.849653 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:01:52 crc kubenswrapper[4675]: I1121 15:01:52.849289 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:01:52 crc kubenswrapper[4675]: E1121 15:01:52.850385 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:02:06 crc kubenswrapper[4675]: I1121 15:02:06.850975 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:02:06 crc kubenswrapper[4675]: E1121 15:02:06.852345 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:02:20 crc kubenswrapper[4675]: I1121 15:02:20.850604 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:02:21 crc kubenswrapper[4675]: I1121 15:02:21.265362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"fbb8211188afd058cb0c631c9db5272c834697e9ee63bbb77dff2aea6b76b1be"} Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.564592 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-drdpf"] Nov 21 15:04:37 crc kubenswrapper[4675]: E1121 15:04:37.566028 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06506868-9f56-4ca3-870a-bf6062173504" containerName="keystone-cron" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.566045 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06506868-9f56-4ca3-870a-bf6062173504" containerName="keystone-cron" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.566347 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="06506868-9f56-4ca3-870a-bf6062173504" containerName="keystone-cron" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.568219 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.576985 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-drdpf"] Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.634291 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-utilities\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.634409 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sf4t\" (UniqueName: \"kubernetes.io/projected/3b02f48d-11ff-44e8-b68a-850910d0edea-kube-api-access-8sf4t\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.634612 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-catalog-content\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.737081 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-catalog-content\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.737259 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-utilities\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.737362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sf4t\" (UniqueName: \"kubernetes.io/projected/3b02f48d-11ff-44e8-b68a-850910d0edea-kube-api-access-8sf4t\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.741262 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-catalog-content\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.744129 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-utilities\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.791778 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8sf4t\" (UniqueName: \"kubernetes.io/projected/3b02f48d-11ff-44e8-b68a-850910d0edea-kube-api-access-8sf4t\") pod \"redhat-marketplace-drdpf\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:37 crc kubenswrapper[4675]: I1121 15:04:37.916890 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:38 crc kubenswrapper[4675]: I1121 15:04:38.602757 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-drdpf"] Nov 21 15:04:38 crc kubenswrapper[4675]: I1121 15:04:38.967253 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerStarted","Data":"05c7a34cd615eb64adbcc57eeb60b434c7d44e55735db4ee0c7b997b675a47a8"} Nov 21 15:04:38 crc kubenswrapper[4675]: I1121 15:04:38.967304 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerStarted","Data":"c7a59b57190d5bf025be9cc7e78087c8d9933dd293a82280438b4e67d507d13f"} Nov 21 15:04:39 crc kubenswrapper[4675]: I1121 15:04:39.979719 4675 generic.go:334] "Generic (PLEG): container finished" podID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerID="05c7a34cd615eb64adbcc57eeb60b434c7d44e55735db4ee0c7b997b675a47a8" exitCode=0 Nov 21 15:04:39 crc kubenswrapper[4675]: I1121 15:04:39.980011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerDied","Data":"05c7a34cd615eb64adbcc57eeb60b434c7d44e55735db4ee0c7b997b675a47a8"} Nov 21 15:04:42 crc kubenswrapper[4675]: I1121 15:04:42.001706 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerStarted","Data":"db9c028d532a78cc36ad028f519ee0f189c07c92116b93b63bc5c0a8e49cc599"} Nov 21 15:04:44 crc kubenswrapper[4675]: I1121 15:04:44.038668 4675 generic.go:334] "Generic (PLEG): container finished" podID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerID="db9c028d532a78cc36ad028f519ee0f189c07c92116b93b63bc5c0a8e49cc599" exitCode=0 Nov 21 15:04:44 crc kubenswrapper[4675]: I1121 15:04:44.038741 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerDied","Data":"db9c028d532a78cc36ad028f519ee0f189c07c92116b93b63bc5c0a8e49cc599"} Nov 21 15:04:46 crc kubenswrapper[4675]: I1121 15:04:46.136554 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:04:46 crc kubenswrapper[4675]: I1121 15:04:46.136942 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:04:47 crc 
kubenswrapper[4675]: I1121 15:04:47.071923 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerStarted","Data":"28ca47eac8d613ce3b59c577e6d624b7fe649d242af09efd5f2aafb4f4913654"} Nov 21 15:04:47 crc kubenswrapper[4675]: I1121 15:04:47.103683 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-drdpf" podStartSLOduration=3.6069109790000002 podStartE2EDuration="10.103661302s" podCreationTimestamp="2025-11-21 15:04:37 +0000 UTC" firstStartedPulling="2025-11-21 15:04:39.983046018 +0000 UTC m=+5556.709460745" lastFinishedPulling="2025-11-21 15:04:46.479796341 +0000 UTC m=+5563.206211068" observedRunningTime="2025-11-21 15:04:47.091281212 +0000 UTC m=+5563.817695959" watchObservedRunningTime="2025-11-21 15:04:47.103661302 +0000 UTC m=+5563.830076039" Nov 21 15:04:47 crc kubenswrapper[4675]: I1121 15:04:47.917185 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:47 crc kubenswrapper[4675]: I1121 15:04:47.917261 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:48 crc kubenswrapper[4675]: I1121 15:04:48.993268 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-drdpf" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:04:48 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:04:48 crc kubenswrapper[4675]: > Nov 21 15:04:57 crc kubenswrapper[4675]: I1121 15:04:57.976357 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:58 crc kubenswrapper[4675]: I1121 15:04:58.025316 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:04:58 crc kubenswrapper[4675]: I1121 15:04:58.226021 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-drdpf"] Nov 21 15:04:59 crc kubenswrapper[4675]: I1121 15:04:59.210274 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-drdpf" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="registry-server" containerID="cri-o://28ca47eac8d613ce3b59c577e6d624b7fe649d242af09efd5f2aafb4f4913654" gracePeriod=2 Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.226397 4675 generic.go:334] "Generic (PLEG): container finished" podID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerID="28ca47eac8d613ce3b59c577e6d624b7fe649d242af09efd5f2aafb4f4913654" exitCode=0 Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.227030 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerDied","Data":"28ca47eac8d613ce3b59c577e6d624b7fe649d242af09efd5f2aafb4f4913654"} Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.398911 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.512237 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-utilities\") pod \"3b02f48d-11ff-44e8-b68a-850910d0edea\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.512511 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sf4t\" (UniqueName: \"kubernetes.io/projected/3b02f48d-11ff-44e8-b68a-850910d0edea-kube-api-access-8sf4t\") pod \"3b02f48d-11ff-44e8-b68a-850910d0edea\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.512571 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-catalog-content\") pod \"3b02f48d-11ff-44e8-b68a-850910d0edea\" (UID: \"3b02f48d-11ff-44e8-b68a-850910d0edea\") " Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.513224 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-utilities" (OuterVolumeSpecName: "utilities") pod "3b02f48d-11ff-44e8-b68a-850910d0edea" (UID: "3b02f48d-11ff-44e8-b68a-850910d0edea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.513376 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.543507 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b02f48d-11ff-44e8-b68a-850910d0edea-kube-api-access-8sf4t" (OuterVolumeSpecName: "kube-api-access-8sf4t") pod "3b02f48d-11ff-44e8-b68a-850910d0edea" (UID: "3b02f48d-11ff-44e8-b68a-850910d0edea"). InnerVolumeSpecName "kube-api-access-8sf4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.549772 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b02f48d-11ff-44e8-b68a-850910d0edea" (UID: "3b02f48d-11ff-44e8-b68a-850910d0edea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.615387 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sf4t\" (UniqueName: \"kubernetes.io/projected/3b02f48d-11ff-44e8-b68a-850910d0edea-kube-api-access-8sf4t\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:00 crc kubenswrapper[4675]: I1121 15:05:00.615425 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b02f48d-11ff-44e8-b68a-850910d0edea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.239335 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drdpf" event={"ID":"3b02f48d-11ff-44e8-b68a-850910d0edea","Type":"ContainerDied","Data":"c7a59b57190d5bf025be9cc7e78087c8d9933dd293a82280438b4e67d507d13f"} Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.239431 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drdpf" Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.239638 4675 scope.go:117] "RemoveContainer" containerID="28ca47eac8d613ce3b59c577e6d624b7fe649d242af09efd5f2aafb4f4913654" Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.275854 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-drdpf"] Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.283114 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-drdpf"] Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.288833 4675 scope.go:117] "RemoveContainer" containerID="db9c028d532a78cc36ad028f519ee0f189c07c92116b93b63bc5c0a8e49cc599" Nov 21 15:05:01 crc kubenswrapper[4675]: I1121 15:05:01.320289 4675 scope.go:117] "RemoveContainer" containerID="05c7a34cd615eb64adbcc57eeb60b434c7d44e55735db4ee0c7b997b675a47a8" Nov 21 15:05:02 crc kubenswrapper[4675]: I1121 15:05:02.888085 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" path="/var/lib/kubelet/pods/3b02f48d-11ff-44e8-b68a-850910d0edea/volumes" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.208455 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkpdv"] Nov 21 15:05:07 crc kubenswrapper[4675]: E1121 15:05:07.209620 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="registry-server" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.209640 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="registry-server" Nov 21 15:05:07 crc kubenswrapper[4675]: E1121 15:05:07.209719 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="extract-utilities" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.209730 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="extract-utilities" Nov 21 15:05:07 crc kubenswrapper[4675]: E1121 15:05:07.209794 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="extract-content" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.209802 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="extract-content" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.210094 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b02f48d-11ff-44e8-b68a-850910d0edea" containerName="registry-server" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.212326 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.223341 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkpdv"] Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.289873 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-utilities\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.290325 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlbd\" (UniqueName: \"kubernetes.io/projected/4de56990-46be-4347-9840-caed10be0def-kube-api-access-4mlbd\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.290454 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-catalog-content\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.393217 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlbd\" (UniqueName: \"kubernetes.io/projected/4de56990-46be-4347-9840-caed10be0def-kube-api-access-4mlbd\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.393279 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-catalog-content\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.393459 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-utilities\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.394045 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-catalog-content\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.394090 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-utilities\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.419139 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlbd\" (UniqueName: \"kubernetes.io/projected/4de56990-46be-4347-9840-caed10be0def-kube-api-access-4mlbd\") pod \"community-operators-wkpdv\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:07 crc kubenswrapper[4675]: I1121 15:05:07.543790 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:08 crc kubenswrapper[4675]: I1121 15:05:08.192904 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkpdv"] Nov 21 15:05:08 crc kubenswrapper[4675]: I1121 15:05:08.338992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerStarted","Data":"4f05b0e96c5268396a6b489282078158daeb1c85b83ed1d211465c7c02442a89"} Nov 21 15:05:09 crc kubenswrapper[4675]: I1121 15:05:09.352380 4675 generic.go:334] "Generic (PLEG): container finished" podID="4de56990-46be-4347-9840-caed10be0def" containerID="b370d08adfe3f1cc82f52a4012b93dd591a6815da811f7669eb186c8aea90fc5" exitCode=0 Nov 21 15:05:09 crc kubenswrapper[4675]: I1121 15:05:09.352500 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerDied","Data":"b370d08adfe3f1cc82f52a4012b93dd591a6815da811f7669eb186c8aea90fc5"} Nov 21 15:05:09 crc kubenswrapper[4675]: I1121 15:05:09.356132 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:05:10 crc kubenswrapper[4675]: I1121 15:05:10.364000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerStarted","Data":"37781fd59d1f2b1eea998ec839c17742db164f62a5aa7498f91f221eed0b99f9"} Nov 21 15:05:15 crc kubenswrapper[4675]: I1121 15:05:15.418002 4675 generic.go:334] "Generic (PLEG): container finished" podID="4de56990-46be-4347-9840-caed10be0def" containerID="37781fd59d1f2b1eea998ec839c17742db164f62a5aa7498f91f221eed0b99f9" exitCode=0 Nov 21 15:05:15 crc kubenswrapper[4675]: I1121 15:05:15.418059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerDied","Data":"37781fd59d1f2b1eea998ec839c17742db164f62a5aa7498f91f221eed0b99f9"} Nov 21 15:05:16 crc kubenswrapper[4675]: I1121 15:05:16.136247 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:05:16 crc kubenswrapper[4675]: I1121 15:05:16.136620 4675 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:05:17 crc kubenswrapper[4675]: I1121 15:05:17.452693 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerStarted","Data":"9ffab81ac5c7095ada512fb4f8479201389d10d520d5ee10fe479f5a8edfcd8f"} Nov 21 15:05:17 crc kubenswrapper[4675]: I1121 15:05:17.480372 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkpdv" podStartSLOduration=3.348466152 podStartE2EDuration="10.480336407s" podCreationTimestamp="2025-11-21 15:05:07 +0000 UTC" firstStartedPulling="2025-11-21 15:05:09.355824501 +0000 UTC m=+5586.082239228" lastFinishedPulling="2025-11-21 15:05:16.487694756 +0000 UTC m=+5593.214109483" observedRunningTime="2025-11-21 15:05:17.473741762 +0000 UTC m=+5594.200156509" watchObservedRunningTime="2025-11-21 15:05:17.480336407 +0000 UTC m=+5594.206751134" Nov 21 15:05:17 crc kubenswrapper[4675]: I1121 15:05:17.545662 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:17 crc kubenswrapper[4675]: I1121 15:05:17.545711 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:18 crc kubenswrapper[4675]: I1121 15:05:18.615912 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wkpdv" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="registry-server" probeResult="failure" output=< Nov 21 15:05:18 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:05:18 crc kubenswrapper[4675]: > Nov 21 15:05:25 crc kubenswrapper[4675]: I1121 15:05:25.680809 4675 generic.go:334] "Generic (PLEG): container finished" podID="71faa523-7927-4fc1-bb12-0f787758620a" containerID="7224b19184a0338249fafd727fcb725b586ee7dd5f53fd0cb299c29069007f47" exitCode=0 Nov 21 15:05:25 crc kubenswrapper[4675]: I1121 15:05:25.680927 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"71faa523-7927-4fc1-bb12-0f787758620a","Type":"ContainerDied","Data":"7224b19184a0338249fafd727fcb725b586ee7dd5f53fd0cb299c29069007f47"} Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.184379 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.328375 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ssh-key\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.328818 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-workdir\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.328844 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.328928 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.328980 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config-secret\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.329158 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-config-data\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.329212 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-temporary\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.329260 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b22t\" (UniqueName: \"kubernetes.io/projected/71faa523-7927-4fc1-bb12-0f787758620a-kube-api-access-5b22t\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.329287 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ca-certs\") pod \"71faa523-7927-4fc1-bb12-0f787758620a\" (UID: \"71faa523-7927-4fc1-bb12-0f787758620a\") " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.332152 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.334025 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.334208 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-config-data" (OuterVolumeSpecName: "config-data") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.337589 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71faa523-7927-4fc1-bb12-0f787758620a-kube-api-access-5b22t" (OuterVolumeSpecName: "kube-api-access-5b22t") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "kube-api-access-5b22t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.343597 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.363581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.369616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.372954 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432723 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432771 4675 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432830 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432846 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432860 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432872 4675 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/71faa523-7927-4fc1-bb12-0f787758620a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432883 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b22t\" (UniqueName: \"kubernetes.io/projected/71faa523-7927-4fc1-bb12-0f787758620a-kube-api-access-5b22t\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.432894 4675 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/71faa523-7927-4fc1-bb12-0f787758620a-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.702304 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"71faa523-7927-4fc1-bb12-0f787758620a","Type":"ContainerDied","Data":"d52e3f21b8823a47a35cb23b1388e9db5be6cec06f8e9dc0b471924b29459d2b"} Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.702351 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d52e3f21b8823a47a35cb23b1388e9db5be6cec06f8e9dc0b471924b29459d2b" Nov 21 15:05:27 crc kubenswrapper[4675]: I1121 15:05:27.702369 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 15:05:28 crc kubenswrapper[4675]: I1121 15:05:28.121213 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "71faa523-7927-4fc1-bb12-0f787758620a" (UID: "71faa523-7927-4fc1-bb12-0f787758620a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:05:28 crc kubenswrapper[4675]: I1121 15:05:28.126484 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 21 15:05:28 crc kubenswrapper[4675]: I1121 15:05:28.155754 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/71faa523-7927-4fc1-bb12-0f787758620a-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:28 crc kubenswrapper[4675]: I1121 15:05:28.155793 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:29 crc kubenswrapper[4675]: I1121 15:05:29.154442 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wkpdv" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="registry-server" probeResult="failure" output=< Nov 21 15:05:29 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:05:29 crc kubenswrapper[4675]: > Nov 21 15:05:37 crc kubenswrapper[4675]: I1121 15:05:37.624125 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:37 crc kubenswrapper[4675]: I1121 15:05:37.677097 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:38 crc kubenswrapper[4675]: I1121 15:05:38.412265 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkpdv"] Nov 21 15:05:38 crc kubenswrapper[4675]: I1121 15:05:38.817860 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkpdv" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="registry-server" containerID="cri-o://9ffab81ac5c7095ada512fb4f8479201389d10d520d5ee10fe479f5a8edfcd8f" gracePeriod=2 Nov 21 15:05:39 crc kubenswrapper[4675]: I1121 15:05:39.839906 4675 generic.go:334] "Generic (PLEG): container finished" podID="4de56990-46be-4347-9840-caed10be0def" containerID="9ffab81ac5c7095ada512fb4f8479201389d10d520d5ee10fe479f5a8edfcd8f" exitCode=0 Nov 21 15:05:39 crc kubenswrapper[4675]: I1121 15:05:39.839986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerDied","Data":"9ffab81ac5c7095ada512fb4f8479201389d10d520d5ee10fe479f5a8edfcd8f"} Nov 21 15:05:39 crc kubenswrapper[4675]: I1121 15:05:39.840433 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkpdv" event={"ID":"4de56990-46be-4347-9840-caed10be0def","Type":"ContainerDied","Data":"4f05b0e96c5268396a6b489282078158daeb1c85b83ed1d211465c7c02442a89"} Nov 21 15:05:39 crc kubenswrapper[4675]: I1121 15:05:39.840469 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f05b0e96c5268396a6b489282078158daeb1c85b83ed1d211465c7c02442a89" Nov 21 15:05:39 crc kubenswrapper[4675]: I1121 15:05:39.890340 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.049640 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mlbd\" (UniqueName: \"kubernetes.io/projected/4de56990-46be-4347-9840-caed10be0def-kube-api-access-4mlbd\") pod \"4de56990-46be-4347-9840-caed10be0def\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.049764 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-catalog-content\") pod \"4de56990-46be-4347-9840-caed10be0def\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.049845 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-utilities\") pod \"4de56990-46be-4347-9840-caed10be0def\" (UID: \"4de56990-46be-4347-9840-caed10be0def\") " Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.050844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-utilities" (OuterVolumeSpecName: "utilities") pod "4de56990-46be-4347-9840-caed10be0def" (UID: "4de56990-46be-4347-9840-caed10be0def"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.058671 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de56990-46be-4347-9840-caed10be0def-kube-api-access-4mlbd" (OuterVolumeSpecName: "kube-api-access-4mlbd") pod "4de56990-46be-4347-9840-caed10be0def" (UID: "4de56990-46be-4347-9840-caed10be0def"). InnerVolumeSpecName "kube-api-access-4mlbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.116413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4de56990-46be-4347-9840-caed10be0def" (UID: "4de56990-46be-4347-9840-caed10be0def"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.154288 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mlbd\" (UniqueName: \"kubernetes.io/projected/4de56990-46be-4347-9840-caed10be0def-kube-api-access-4mlbd\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.154369 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.154380 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de56990-46be-4347-9840-caed10be0def-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.184304 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 15:05:40 crc kubenswrapper[4675]: E1121 15:05:40.185005 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="extract-content" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.185038 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="extract-content" Nov 21 15:05:40 crc kubenswrapper[4675]: E1121 15:05:40.185064 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71faa523-7927-4fc1-bb12-0f787758620a" containerName="tempest-tests-tempest-tests-runner" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.185106 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="71faa523-7927-4fc1-bb12-0f787758620a" containerName="tempest-tests-tempest-tests-runner" Nov 21 15:05:40 crc kubenswrapper[4675]: E1121 15:05:40.185131 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="registry-server" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.185151 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="registry-server" Nov 21 15:05:40 crc kubenswrapper[4675]: E1121 15:05:40.185186 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="extract-utilities" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.185203 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="extract-utilities" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.186453 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de56990-46be-4347-9840-caed10be0def" containerName="registry-server" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.186816 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="71faa523-7927-4fc1-bb12-0f787758620a" containerName="tempest-tests-tempest-tests-runner" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.187990 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.192464 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pw5wd" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.195408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.358521 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzfw\" (UniqueName: \"kubernetes.io/projected/bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94-kube-api-access-gtzfw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.358605 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.460558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtzfw\" (UniqueName: \"kubernetes.io/projected/bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94-kube-api-access-gtzfw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.460629 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.462458 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.485146 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtzfw\" (UniqueName: \"kubernetes.io/projected/bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94-kube-api-access-gtzfw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.498003 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc 
kubenswrapper[4675]: I1121 15:05:40.514664 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.848800 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wkpdv" Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.910866 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkpdv"] Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.925786 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkpdv"] Nov 21 15:05:40 crc kubenswrapper[4675]: I1121 15:05:40.995346 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 15:05:41 crc kubenswrapper[4675]: I1121 15:05:41.862957 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94","Type":"ContainerStarted","Data":"6e0b0cb28dec3977dd896803d03dc4b2c0aea3d8710073d2898326eeacd3eed2"} Nov 21 15:05:42 crc kubenswrapper[4675]: I1121 15:05:42.864135 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de56990-46be-4347-9840-caed10be0def" path="/var/lib/kubelet/pods/4de56990-46be-4347-9840-caed10be0def/volumes" Nov 21 15:05:42 crc kubenswrapper[4675]: I1121 15:05:42.876496 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94","Type":"ContainerStarted","Data":"464c280b54485fecb7cdefc6c51b504c52ab950190ceee836e8956ef10c846df"} Nov 21 15:05:42 crc kubenswrapper[4675]: I1121 15:05:42.894139 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.689539347 podStartE2EDuration="2.894116088s" podCreationTimestamp="2025-11-21 15:05:40 +0000 UTC" firstStartedPulling="2025-11-21 15:05:40.994409001 +0000 UTC m=+5617.720823728" lastFinishedPulling="2025-11-21 15:05:42.198985742 +0000 UTC m=+5618.925400469" observedRunningTime="2025-11-21 15:05:42.888655091 +0000 UTC m=+5619.615069818" watchObservedRunningTime="2025-11-21 15:05:42.894116088 +0000 UTC m=+5619.620530815" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.017936 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7254z"] Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.020582 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.035880 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7254z"] Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.129914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-utilities\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.130155 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgx9x\" (UniqueName: \"kubernetes.io/projected/f60cf358-02ab-4432-b283-ccb3366f9bea-kube-api-access-mgx9x\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.130226 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-catalog-content\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.232481 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-utilities\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.232597 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgx9x\" (UniqueName: \"kubernetes.io/projected/f60cf358-02ab-4432-b283-ccb3366f9bea-kube-api-access-mgx9x\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.232629 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-catalog-content\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.232960 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-utilities\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.234191 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-catalog-content\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.254869 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mgx9x\" (UniqueName: \"kubernetes.io/projected/f60cf358-02ab-4432-b283-ccb3366f9bea-kube-api-access-mgx9x\") pod \"redhat-operators-7254z\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.344629 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.868277 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7254z"] Nov 21 15:05:43 crc kubenswrapper[4675]: I1121 15:05:43.889199 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerStarted","Data":"78d2110e9873453c33bb7085776b9ac75a68095cd9dd242c8fe181cd342a5b5d"} Nov 21 15:05:44 crc kubenswrapper[4675]: I1121 15:05:44.907675 4675 generic.go:334] "Generic (PLEG): container finished" podID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerID="5d355e936a9b11bac65d35d15dd3a16744e5ade03dea06cfbb2613923f9543ad" exitCode=0 Nov 21 15:05:44 crc kubenswrapper[4675]: I1121 15:05:44.907756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerDied","Data":"5d355e936a9b11bac65d35d15dd3a16744e5ade03dea06cfbb2613923f9543ad"} Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.137096 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.138150 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.138413 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.139260 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbb8211188afd058cb0c631c9db5272c834697e9ee63bbb77dff2aea6b76b1be"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.139324 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://fbb8211188afd058cb0c631c9db5272c834697e9ee63bbb77dff2aea6b76b1be" gracePeriod=600 Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.946452 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="fbb8211188afd058cb0c631c9db5272c834697e9ee63bbb77dff2aea6b76b1be" exitCode=0 Nov 
21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.946530 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"fbb8211188afd058cb0c631c9db5272c834697e9ee63bbb77dff2aea6b76b1be"} Nov 21 15:05:46 crc kubenswrapper[4675]: I1121 15:05:46.946880 4675 scope.go:117] "RemoveContainer" containerID="4ae6bc82a16381eed2d593313b17e0d53f49ed31eb3663b81a6e48546e2d832a" Nov 21 15:05:47 crc kubenswrapper[4675]: I1121 15:05:47.960728 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"} Nov 21 15:05:47 crc kubenswrapper[4675]: I1121 15:05:47.963657 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerStarted","Data":"546a3bb0728d409c232f9bf3b2044310bd3df8b2ac2e87a9506f2d7e0024165a"} Nov 21 15:05:59 crc kubenswrapper[4675]: I1121 15:05:59.103873 4675 generic.go:334] "Generic (PLEG): container finished" podID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerID="546a3bb0728d409c232f9bf3b2044310bd3df8b2ac2e87a9506f2d7e0024165a" exitCode=0 Nov 21 15:05:59 crc kubenswrapper[4675]: I1121 15:05:59.103961 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerDied","Data":"546a3bb0728d409c232f9bf3b2044310bd3df8b2ac2e87a9506f2d7e0024165a"} Nov 21 15:06:00 crc kubenswrapper[4675]: I1121 15:06:00.120557 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerStarted","Data":"7bfd72bdeb919f0eabf9e668b8f92fa52af524dc285b30a1075891b74d32f187"} Nov 21 15:06:00 crc kubenswrapper[4675]: I1121 15:06:00.140728 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7254z" podStartSLOduration=3.453312597 podStartE2EDuration="18.140707994s" podCreationTimestamp="2025-11-21 15:05:42 +0000 UTC" firstStartedPulling="2025-11-21 15:05:44.910766264 +0000 UTC m=+5621.637180991" lastFinishedPulling="2025-11-21 15:05:59.598161661 +0000 UTC m=+5636.324576388" observedRunningTime="2025-11-21 15:06:00.138500689 +0000 UTC m=+5636.864915456" watchObservedRunningTime="2025-11-21 15:06:00.140707994 +0000 UTC m=+5636.867122721" Nov 21 15:06:03 crc kubenswrapper[4675]: I1121 15:06:03.345309 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:06:03 crc kubenswrapper[4675]: I1121 15:06:03.347337 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:06:04 crc kubenswrapper[4675]: I1121 15:06:04.393684 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:06:04 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:06:04 crc kubenswrapper[4675]: > Nov 21 15:06:14 crc kubenswrapper[4675]: I1121 15:06:14.403570 4675 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:06:14 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:06:14 crc kubenswrapper[4675]: > Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.781179 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l4gdg/must-gather-g8mkd"] Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.785877 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.788440 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l4gdg"/"openshift-service-ca.crt" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.788452 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l4gdg"/"kube-root-ca.crt" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.805713 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l4gdg/must-gather-g8mkd"] Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.891193 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvlw\" (UniqueName: \"kubernetes.io/projected/cc98edbf-7660-4c65-adfb-c44fea8df67b-kube-api-access-dgvlw\") pod \"must-gather-g8mkd\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.891309 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc98edbf-7660-4c65-adfb-c44fea8df67b-must-gather-output\") pod \"must-gather-g8mkd\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.993647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvlw\" (UniqueName: \"kubernetes.io/projected/cc98edbf-7660-4c65-adfb-c44fea8df67b-kube-api-access-dgvlw\") pod \"must-gather-g8mkd\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.993752 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc98edbf-7660-4c65-adfb-c44fea8df67b-must-gather-output\") pod \"must-gather-g8mkd\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:20 crc kubenswrapper[4675]: I1121 15:06:20.994299 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc98edbf-7660-4c65-adfb-c44fea8df67b-must-gather-output\") pod \"must-gather-g8mkd\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:21 crc kubenswrapper[4675]: I1121 15:06:21.030920 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvlw\" (UniqueName: \"kubernetes.io/projected/cc98edbf-7660-4c65-adfb-c44fea8df67b-kube-api-access-dgvlw\") pod \"must-gather-g8mkd\" (UID: 
\"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:21 crc kubenswrapper[4675]: I1121 15:06:21.114660 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:06:21 crc kubenswrapper[4675]: I1121 15:06:21.989991 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l4gdg/must-gather-g8mkd"] Nov 21 15:06:22 crc kubenswrapper[4675]: I1121 15:06:22.353856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" event={"ID":"cc98edbf-7660-4c65-adfb-c44fea8df67b","Type":"ContainerStarted","Data":"050dbdcd393dd06776a8991622c83722574f218ca2889b2773a28b12bf108912"} Nov 21 15:06:24 crc kubenswrapper[4675]: I1121 15:06:24.425297 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:06:24 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:06:24 crc kubenswrapper[4675]: > Nov 21 15:06:34 crc kubenswrapper[4675]: I1121 15:06:34.389088 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:06:34 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:06:34 crc kubenswrapper[4675]: > Nov 21 15:06:35 crc kubenswrapper[4675]: I1121 15:06:35.514806 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" event={"ID":"cc98edbf-7660-4c65-adfb-c44fea8df67b","Type":"ContainerStarted","Data":"5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af"} Nov 21 15:06:36 crc kubenswrapper[4675]: I1121 15:06:36.527380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" event={"ID":"cc98edbf-7660-4c65-adfb-c44fea8df67b","Type":"ContainerStarted","Data":"ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5"} Nov 21 15:06:36 crc kubenswrapper[4675]: I1121 15:06:36.543079 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" podStartSLOduration=3.958023877 podStartE2EDuration="16.543041589s" podCreationTimestamp="2025-11-21 15:06:20 +0000 UTC" firstStartedPulling="2025-11-21 15:06:21.998192283 +0000 UTC m=+5658.724607010" lastFinishedPulling="2025-11-21 15:06:34.583209985 +0000 UTC m=+5671.309624722" observedRunningTime="2025-11-21 15:06:36.539966742 +0000 UTC m=+5673.266381469" watchObservedRunningTime="2025-11-21 15:06:36.543041589 +0000 UTC m=+5673.269456316" Nov 21 15:06:43 crc kubenswrapper[4675]: I1121 15:06:43.894459 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-vsdg8"] Nov 21 15:06:43 crc kubenswrapper[4675]: I1121 15:06:43.896656 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:43 crc kubenswrapper[4675]: I1121 15:06:43.899570 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l4gdg"/"default-dockercfg-6x87w" Nov 21 15:06:43 crc kubenswrapper[4675]: I1121 15:06:43.982406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptl8f\" (UniqueName: \"kubernetes.io/projected/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-kube-api-access-ptl8f\") pod \"crc-debug-vsdg8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:43 crc kubenswrapper[4675]: I1121 15:06:43.982485 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-host\") pod \"crc-debug-vsdg8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.084711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptl8f\" (UniqueName: \"kubernetes.io/projected/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-kube-api-access-ptl8f\") pod \"crc-debug-vsdg8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.084783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-host\") pod \"crc-debug-vsdg8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.085008 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-host\") pod \"crc-debug-vsdg8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.132165 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptl8f\" (UniqueName: \"kubernetes.io/projected/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-kube-api-access-ptl8f\") pod \"crc-debug-vsdg8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.224282 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.397970 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:06:44 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:06:44 crc kubenswrapper[4675]: > Nov 21 15:06:44 crc kubenswrapper[4675]: I1121 15:06:44.615467 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" event={"ID":"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8","Type":"ContainerStarted","Data":"427c9b5aaabe75d51a4e4ab4e8d9731e16fe1147ba2fa9da501152802c52ea8a"} Nov 21 15:06:54 crc kubenswrapper[4675]: I1121 15:06:54.726970 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:06:54 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:06:54 crc kubenswrapper[4675]: > Nov 21 15:07:04 crc kubenswrapper[4675]: I1121 15:07:04.395335 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:07:04 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:07:04 crc kubenswrapper[4675]: > Nov 21 15:07:14 crc kubenswrapper[4675]: I1121 15:07:14.470393 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:07:14 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:07:14 crc kubenswrapper[4675]: > Nov 21 15:07:21 crc kubenswrapper[4675]: I1121 15:07:21.760974 4675 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.079878686s: [/var/lib/containers/storage/overlay/bb60677f88b51ab94c4d3d7db20302882d76dc3dc32e1ab15d291d9699d186ce/diff /var/log/pods/openstack_nova-cell0-conductor-0_18c40348-4d27-4b4c-9b8a-eac9b8b7252a/nova-cell0-conductor-conductor/0.log]; will not log again for this container unless duration exceeds 2s Nov 21 15:07:24 crc kubenswrapper[4675]: I1121 15:07:24.395018 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:07:24 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:07:24 crc kubenswrapper[4675]: > Nov 21 15:07:25 crc kubenswrapper[4675]: E1121 15:07:25.231906 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 21 15:07:25 crc kubenswrapper[4675]: E1121 15:07:25.452396 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash 
-c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptl8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-vsdg8_openshift-must-gather-l4gdg(5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 15:07:25 crc kubenswrapper[4675]: E1121 15:07:25.453867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" podUID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" Nov 21 15:07:26 crc kubenswrapper[4675]: E1121 15:07:26.071903 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" podUID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" Nov 21 15:07:34 crc kubenswrapper[4675]: I1121 15:07:34.398609 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:07:34 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:07:34 crc kubenswrapper[4675]: > Nov 21 15:07:34 crc kubenswrapper[4675]: I1121 15:07:34.398960 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7254z" Nov 
21 15:07:34 crc kubenswrapper[4675]: I1121 15:07:34.399898 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"7bfd72bdeb919f0eabf9e668b8f92fa52af524dc285b30a1075891b74d32f187"} pod="openshift-marketplace/redhat-operators-7254z" containerMessage="Container registry-server failed startup probe, will be restarted" Nov 21 15:07:34 crc kubenswrapper[4675]: I1121 15:07:34.399934 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" containerID="cri-o://7bfd72bdeb919f0eabf9e668b8f92fa52af524dc285b30a1075891b74d32f187" gracePeriod=30 Nov 21 15:07:43 crc kubenswrapper[4675]: I1121 15:07:43.341197 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" event={"ID":"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8","Type":"ContainerStarted","Data":"5e2c14db621e74294e8b106e73bc6435ea9d6e506f7bfa745fab54cd4fb57282"} Nov 21 15:07:44 crc kubenswrapper[4675]: I1121 15:07:44.372112 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" podStartSLOduration=2.959872503 podStartE2EDuration="1m1.372091529s" podCreationTimestamp="2025-11-21 15:06:43 +0000 UTC" firstStartedPulling="2025-11-21 15:06:44.3624715 +0000 UTC m=+5681.088886227" lastFinishedPulling="2025-11-21 15:07:42.774690526 +0000 UTC m=+5739.501105253" observedRunningTime="2025-11-21 15:07:44.367710719 +0000 UTC m=+5741.094125446" watchObservedRunningTime="2025-11-21 15:07:44.372091529 +0000 UTC m=+5741.098506256" Nov 21 15:07:56 crc kubenswrapper[4675]: I1121 15:07:56.487358 4675 generic.go:334] "Generic (PLEG): container finished" podID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerID="7bfd72bdeb919f0eabf9e668b8f92fa52af524dc285b30a1075891b74d32f187" exitCode=0 Nov 21 15:07:56 crc kubenswrapper[4675]: I1121 15:07:56.487456 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerDied","Data":"7bfd72bdeb919f0eabf9e668b8f92fa52af524dc285b30a1075891b74d32f187"} Nov 21 15:08:03 crc kubenswrapper[4675]: I1121 15:08:03.783643 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="05bf6265-2f8a-4d78-9f5a-05304816937d" containerName="galera" probeResult="failure" output="command timed out" Nov 21 15:08:03 crc kubenswrapper[4675]: I1121 15:08:03.788400 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="05bf6265-2f8a-4d78-9f5a-05304816937d" containerName="galera" probeResult="failure" output="command timed out" Nov 21 15:08:16 crc kubenswrapper[4675]: I1121 15:08:16.136182 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:08:16 crc kubenswrapper[4675]: I1121 15:08:16.136757 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 21 15:08:22 crc kubenswrapper[4675]: I1121 15:08:22.774301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerStarted","Data":"1ed8514d3536e2f60a91f8c575a17749115cb4b608bb8444eabaa18d8c4eb7fa"} Nov 21 15:08:33 crc kubenswrapper[4675]: I1121 15:08:33.346988 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:08:33 crc kubenswrapper[4675]: I1121 15:08:33.347503 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:08:34 crc kubenswrapper[4675]: I1121 15:08:34.412684 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:08:34 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:08:34 crc kubenswrapper[4675]: > Nov 21 15:08:45 crc kubenswrapper[4675]: I1121 15:08:45.051614 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:08:45 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:08:45 crc kubenswrapper[4675]: > Nov 21 15:08:46 crc kubenswrapper[4675]: I1121 15:08:46.136673 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:08:46 crc kubenswrapper[4675]: I1121 15:08:46.137059 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:08:54 crc kubenswrapper[4675]: I1121 15:08:54.394169 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:08:54 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:08:54 crc kubenswrapper[4675]: > Nov 21 15:09:04 crc kubenswrapper[4675]: I1121 15:09:04.396426 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:09:04 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:09:04 crc kubenswrapper[4675]: > Nov 21 15:09:09 crc kubenswrapper[4675]: I1121 15:09:09.417027 4675 generic.go:334] "Generic (PLEG): container finished" podID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" containerID="5e2c14db621e74294e8b106e73bc6435ea9d6e506f7bfa745fab54cd4fb57282" exitCode=0 Nov 21 15:09:09 crc kubenswrapper[4675]: I1121 15:09:09.417115 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" event={"ID":"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8","Type":"ContainerDied","Data":"5e2c14db621e74294e8b106e73bc6435ea9d6e506f7bfa745fab54cd4fb57282"} Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.550683 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.587269 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-vsdg8"] Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.597510 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-vsdg8"] Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.603971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-host\") pod \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.604041 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptl8f\" (UniqueName: \"kubernetes.io/projected/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-kube-api-access-ptl8f\") pod \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\" (UID: \"5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8\") " Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.604112 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-host" (OuterVolumeSpecName: "host") pod "5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" (UID: "5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.604948 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.610346 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-kube-api-access-ptl8f" (OuterVolumeSpecName: "kube-api-access-ptl8f") pod "5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" (UID: "5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8"). InnerVolumeSpecName "kube-api-access-ptl8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.706650 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptl8f\" (UniqueName: \"kubernetes.io/projected/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8-kube-api-access-ptl8f\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:10 crc kubenswrapper[4675]: I1121 15:09:10.864159 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" path="/var/lib/kubelet/pods/5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8/volumes" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.439900 4675 scope.go:117] "RemoveContainer" containerID="5e2c14db621e74294e8b106e73bc6435ea9d6e506f7bfa745fab54cd4fb57282" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.439922 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-vsdg8" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.769106 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-7tw42"] Nov 21 15:09:11 crc kubenswrapper[4675]: E1121 15:09:11.769710 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" containerName="container-00" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.769730 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" containerName="container-00" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.770133 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5460cb9e-4fc1-4c91-8a37-6a715dd1cdf8" containerName="container-00" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.771113 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.773582 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l4gdg"/"default-dockercfg-6x87w" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.832085 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcnf\" (UniqueName: \"kubernetes.io/projected/fc662eff-9042-446f-9e7c-9009454abaff-kube-api-access-rwcnf\") pod \"crc-debug-7tw42\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.832329 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc662eff-9042-446f-9e7c-9009454abaff-host\") pod \"crc-debug-7tw42\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.935867 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwcnf\" (UniqueName: \"kubernetes.io/projected/fc662eff-9042-446f-9e7c-9009454abaff-kube-api-access-rwcnf\") pod \"crc-debug-7tw42\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.936522 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc662eff-9042-446f-9e7c-9009454abaff-host\") pod \"crc-debug-7tw42\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.936704 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc662eff-9042-446f-9e7c-9009454abaff-host\") pod \"crc-debug-7tw42\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:11 crc kubenswrapper[4675]: I1121 15:09:11.956364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcnf\" (UniqueName: \"kubernetes.io/projected/fc662eff-9042-446f-9e7c-9009454abaff-kube-api-access-rwcnf\") pod \"crc-debug-7tw42\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:12 crc kubenswrapper[4675]: I1121 
15:09:12.096507 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:12 crc kubenswrapper[4675]: I1121 15:09:12.632160 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" event={"ID":"fc662eff-9042-446f-9e7c-9009454abaff","Type":"ContainerStarted","Data":"6f8b09217ab9248b90fea2809104218e73847ab1a3dca722d9595ca310ae7e61"} Nov 21 15:09:13 crc kubenswrapper[4675]: I1121 15:09:13.646923 4675 generic.go:334] "Generic (PLEG): container finished" podID="fc662eff-9042-446f-9e7c-9009454abaff" containerID="eb193d6f56e3a3971a63b93bb58af8a8f872699e0a009a79036e4f14232eb3fe" exitCode=0 Nov 21 15:09:13 crc kubenswrapper[4675]: I1121 15:09:13.647038 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" event={"ID":"fc662eff-9042-446f-9e7c-9009454abaff","Type":"ContainerDied","Data":"eb193d6f56e3a3971a63b93bb58af8a8f872699e0a009a79036e4f14232eb3fe"} Nov 21 15:09:14 crc kubenswrapper[4675]: I1121 15:09:14.409242 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" probeResult="failure" output=< Nov 21 15:09:14 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:09:14 crc kubenswrapper[4675]: > Nov 21 15:09:14 crc kubenswrapper[4675]: I1121 15:09:14.792649 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:14 crc kubenswrapper[4675]: I1121 15:09:14.912821 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwcnf\" (UniqueName: \"kubernetes.io/projected/fc662eff-9042-446f-9e7c-9009454abaff-kube-api-access-rwcnf\") pod \"fc662eff-9042-446f-9e7c-9009454abaff\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " Nov 21 15:09:14 crc kubenswrapper[4675]: I1121 15:09:14.912931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc662eff-9042-446f-9e7c-9009454abaff-host\") pod \"fc662eff-9042-446f-9e7c-9009454abaff\" (UID: \"fc662eff-9042-446f-9e7c-9009454abaff\") " Nov 21 15:09:14 crc kubenswrapper[4675]: I1121 15:09:14.914428 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc662eff-9042-446f-9e7c-9009454abaff-host" (OuterVolumeSpecName: "host") pod "fc662eff-9042-446f-9e7c-9009454abaff" (UID: "fc662eff-9042-446f-9e7c-9009454abaff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:09:14 crc kubenswrapper[4675]: I1121 15:09:14.921234 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc662eff-9042-446f-9e7c-9009454abaff-kube-api-access-rwcnf" (OuterVolumeSpecName: "kube-api-access-rwcnf") pod "fc662eff-9042-446f-9e7c-9009454abaff" (UID: "fc662eff-9042-446f-9e7c-9009454abaff"). InnerVolumeSpecName "kube-api-access-rwcnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:09:15 crc kubenswrapper[4675]: I1121 15:09:15.015970 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwcnf\" (UniqueName: \"kubernetes.io/projected/fc662eff-9042-446f-9e7c-9009454abaff-kube-api-access-rwcnf\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:15 crc kubenswrapper[4675]: I1121 15:09:15.016493 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc662eff-9042-446f-9e7c-9009454abaff-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:15 crc kubenswrapper[4675]: I1121 15:09:15.675277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" event={"ID":"fc662eff-9042-446f-9e7c-9009454abaff","Type":"ContainerDied","Data":"6f8b09217ab9248b90fea2809104218e73847ab1a3dca722d9595ca310ae7e61"} Nov 21 15:09:15 crc kubenswrapper[4675]: I1121 15:09:15.675320 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8b09217ab9248b90fea2809104218e73847ab1a3dca722d9595ca310ae7e61" Nov 21 15:09:15 crc kubenswrapper[4675]: I1121 15:09:15.675384 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-7tw42" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.136157 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.136226 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.136285 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.137288 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.137355 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" gracePeriod=600 Nov 21 15:09:16 crc kubenswrapper[4675]: E1121 15:09:16.264207 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" 
podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.311484 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-7tw42"] Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.330116 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-7tw42"] Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.691146 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" exitCode=0 Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.691204 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"} Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.691251 4675 scope.go:117] "RemoveContainer" containerID="fbb8211188afd058cb0c631c9db5272c834697e9ee63bbb77dff2aea6b76b1be" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.692283 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" Nov 21 15:09:16 crc kubenswrapper[4675]: E1121 15:09:16.692644 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:09:16 crc kubenswrapper[4675]: I1121 15:09:16.865479 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc662eff-9042-446f-9e7c-9009454abaff" path="/var/lib/kubelet/pods/fc662eff-9042-446f-9e7c-9009454abaff/volumes" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.464470 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-nx2zn"] Nov 21 15:09:17 crc kubenswrapper[4675]: E1121 15:09:17.465180 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc662eff-9042-446f-9e7c-9009454abaff" containerName="container-00" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.465193 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc662eff-9042-446f-9e7c-9009454abaff" containerName="container-00" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.465427 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc662eff-9042-446f-9e7c-9009454abaff" containerName="container-00" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.466190 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.468204 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l4gdg"/"default-dockercfg-6x87w" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.574600 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ab92e0e-080b-4554-8608-a26ff174b646-host\") pod \"crc-debug-nx2zn\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.574657 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjcl\" (UniqueName: \"kubernetes.io/projected/0ab92e0e-080b-4554-8608-a26ff174b646-kube-api-access-6pjcl\") pod \"crc-debug-nx2zn\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.677735 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ab92e0e-080b-4554-8608-a26ff174b646-host\") pod \"crc-debug-nx2zn\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.677809 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjcl\" (UniqueName: \"kubernetes.io/projected/0ab92e0e-080b-4554-8608-a26ff174b646-kube-api-access-6pjcl\") pod \"crc-debug-nx2zn\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.677828 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ab92e0e-080b-4554-8608-a26ff174b646-host\") pod \"crc-debug-nx2zn\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.701793 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjcl\" (UniqueName: \"kubernetes.io/projected/0ab92e0e-080b-4554-8608-a26ff174b646-kube-api-access-6pjcl\") pod \"crc-debug-nx2zn\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:17 crc kubenswrapper[4675]: I1121 15:09:17.784539 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:18 crc kubenswrapper[4675]: I1121 15:09:18.721159 4675 generic.go:334] "Generic (PLEG): container finished" podID="0ab92e0e-080b-4554-8608-a26ff174b646" containerID="39ebbc3af4034a2db2f75bc1d7050b40b5ed22f400450aead87a32265b531146" exitCode=0 Nov 21 15:09:18 crc kubenswrapper[4675]: I1121 15:09:18.721272 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" event={"ID":"0ab92e0e-080b-4554-8608-a26ff174b646","Type":"ContainerDied","Data":"39ebbc3af4034a2db2f75bc1d7050b40b5ed22f400450aead87a32265b531146"} Nov 21 15:09:18 crc kubenswrapper[4675]: I1121 15:09:18.721865 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" event={"ID":"0ab92e0e-080b-4554-8608-a26ff174b646","Type":"ContainerStarted","Data":"0fc2fb62626f224c9f9a199bd5c3f77b323da013415d89dc90ba6289ee14b7af"} Nov 21 15:09:18 crc kubenswrapper[4675]: I1121 15:09:18.765938 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-nx2zn"] Nov 21 15:09:18 crc kubenswrapper[4675]: I1121 15:09:18.777927 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l4gdg/crc-debug-nx2zn"] Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.009533 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.038595 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pjcl\" (UniqueName: \"kubernetes.io/projected/0ab92e0e-080b-4554-8608-a26ff174b646-kube-api-access-6pjcl\") pod \"0ab92e0e-080b-4554-8608-a26ff174b646\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.038875 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ab92e0e-080b-4554-8608-a26ff174b646-host\") pod \"0ab92e0e-080b-4554-8608-a26ff174b646\" (UID: \"0ab92e0e-080b-4554-8608-a26ff174b646\") " Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.038989 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ab92e0e-080b-4554-8608-a26ff174b646-host" (OuterVolumeSpecName: "host") pod "0ab92e0e-080b-4554-8608-a26ff174b646" (UID: "0ab92e0e-080b-4554-8608-a26ff174b646"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.039438 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ab92e0e-080b-4554-8608-a26ff174b646-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.078986 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab92e0e-080b-4554-8608-a26ff174b646-kube-api-access-6pjcl" (OuterVolumeSpecName: "kube-api-access-6pjcl") pod "0ab92e0e-080b-4554-8608-a26ff174b646" (UID: "0ab92e0e-080b-4554-8608-a26ff174b646"). InnerVolumeSpecName "kube-api-access-6pjcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.142478 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pjcl\" (UniqueName: \"kubernetes.io/projected/0ab92e0e-080b-4554-8608-a26ff174b646-kube-api-access-6pjcl\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.743846 4675 scope.go:117] "RemoveContainer" containerID="39ebbc3af4034a2db2f75bc1d7050b40b5ed22f400450aead87a32265b531146" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.743906 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/crc-debug-nx2zn" Nov 21 15:09:20 crc kubenswrapper[4675]: I1121 15:09:20.863861 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab92e0e-080b-4554-8608-a26ff174b646" path="/var/lib/kubelet/pods/0ab92e0e-080b-4554-8608-a26ff174b646/volumes" Nov 21 15:09:23 crc kubenswrapper[4675]: I1121 15:09:23.403308 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:09:23 crc kubenswrapper[4675]: I1121 15:09:23.469954 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:09:23 crc kubenswrapper[4675]: I1121 15:09:23.642140 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7254z"] Nov 21 15:09:24 crc kubenswrapper[4675]: I1121 15:09:24.798017 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7254z" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" containerID="cri-o://1ed8514d3536e2f60a91f8c575a17749115cb4b608bb8444eabaa18d8c4eb7fa" gracePeriod=2 Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.812822 4675 generic.go:334] "Generic (PLEG): container finished" podID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerID="1ed8514d3536e2f60a91f8c575a17749115cb4b608bb8444eabaa18d8c4eb7fa" exitCode=0 Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.812902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerDied","Data":"1ed8514d3536e2f60a91f8c575a17749115cb4b608bb8444eabaa18d8c4eb7fa"} Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.813431 4675 scope.go:117] "RemoveContainer" containerID="7bfd72bdeb919f0eabf9e668b8f92fa52af524dc285b30a1075891b74d32f187" Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.929506 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.998099 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-utilities\") pod \"f60cf358-02ab-4432-b283-ccb3366f9bea\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.998402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-catalog-content\") pod \"f60cf358-02ab-4432-b283-ccb3366f9bea\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.998531 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-utilities" (OuterVolumeSpecName: "utilities") pod "f60cf358-02ab-4432-b283-ccb3366f9bea" (UID: "f60cf358-02ab-4432-b283-ccb3366f9bea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.998660 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgx9x\" (UniqueName: \"kubernetes.io/projected/f60cf358-02ab-4432-b283-ccb3366f9bea-kube-api-access-mgx9x\") pod \"f60cf358-02ab-4432-b283-ccb3366f9bea\" (UID: \"f60cf358-02ab-4432-b283-ccb3366f9bea\") " Nov 21 15:09:25 crc kubenswrapper[4675]: I1121 15:09:25.999330 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.006289 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60cf358-02ab-4432-b283-ccb3366f9bea-kube-api-access-mgx9x" (OuterVolumeSpecName: "kube-api-access-mgx9x") pod "f60cf358-02ab-4432-b283-ccb3366f9bea" (UID: "f60cf358-02ab-4432-b283-ccb3366f9bea"). InnerVolumeSpecName "kube-api-access-mgx9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.101685 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgx9x\" (UniqueName: \"kubernetes.io/projected/f60cf358-02ab-4432-b283-ccb3366f9bea-kube-api-access-mgx9x\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.103433 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f60cf358-02ab-4432-b283-ccb3366f9bea" (UID: "f60cf358-02ab-4432-b283-ccb3366f9bea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.204838 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60cf358-02ab-4432-b283-ccb3366f9bea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.827430 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7254z" event={"ID":"f60cf358-02ab-4432-b283-ccb3366f9bea","Type":"ContainerDied","Data":"78d2110e9873453c33bb7085776b9ac75a68095cd9dd242c8fe181cd342a5b5d"} Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.827495 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7254z" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.828507 4675 scope.go:117] "RemoveContainer" containerID="1ed8514d3536e2f60a91f8c575a17749115cb4b608bb8444eabaa18d8c4eb7fa" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.870859 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7254z"] Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.874013 4675 scope.go:117] "RemoveContainer" containerID="546a3bb0728d409c232f9bf3b2044310bd3df8b2ac2e87a9506f2d7e0024165a" Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.883104 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7254z"] Nov 21 15:09:26 crc kubenswrapper[4675]: I1121 15:09:26.899681 4675 scope.go:117] "RemoveContainer" containerID="5d355e936a9b11bac65d35d15dd3a16744e5ade03dea06cfbb2613923f9543ad" Nov 21 15:09:27 crc kubenswrapper[4675]: I1121 15:09:27.849736 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" Nov 21 15:09:27 crc kubenswrapper[4675]: E1121 15:09:27.850472 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:09:28 crc kubenswrapper[4675]: I1121 15:09:28.871003 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" path="/var/lib/kubelet/pods/f60cf358-02ab-4432-b283-ccb3366f9bea/volumes" Nov 21 15:09:38 crc kubenswrapper[4675]: I1121 15:09:38.850375 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" Nov 21 15:09:38 crc kubenswrapper[4675]: E1121 15:09:38.851334 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:09:45 crc kubenswrapper[4675]: I1121 15:09:45.473250 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-api/0.log" Nov 21 15:09:45 crc kubenswrapper[4675]: I1121 
15:09:45.625452 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-listener/0.log" Nov 21 15:09:45 crc kubenswrapper[4675]: I1121 15:09:45.668605 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-notifier/0.log" Nov 21 15:09:45 crc kubenswrapper[4675]: I1121 15:09:45.727132 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-evaluator/0.log" Nov 21 15:09:45 crc kubenswrapper[4675]: I1121 15:09:45.853763 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85bfdcc858-xc95t_370d8c4c-811f-4e1e-b801-828d8fa5d1c2/barbican-api/0.log" Nov 21 15:09:45 crc kubenswrapper[4675]: I1121 15:09:45.932458 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85bfdcc858-xc95t_370d8c4c-811f-4e1e-b801-828d8fa5d1c2/barbican-api-log/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.046915 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57c587945d-p7z7g_521461a2-7f1f-43b2-8ff9-be3a054e25f6/barbican-keystone-listener/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.186894 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57c587945d-p7z7g_521461a2-7f1f-43b2-8ff9-be3a054e25f6/barbican-keystone-listener-log/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.223286 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6998886fc9-xdttj_d4328076-0e3d-40b2-b686-502e7f263a2c/barbican-worker/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.252432 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6998886fc9-xdttj_d4328076-0e3d-40b2-b686-502e7f263a2c/barbican-worker-log/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.379447 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf_fb55d1ca-c721-4bca-9a73-e01fa4da2008/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.543871 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/ceilometer-central-agent/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.663534 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/ceilometer-notification-agent/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.712733 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/proxy-httpd/0.log" Nov 21 15:09:46 crc kubenswrapper[4675]: I1121 15:09:46.715309 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/sg-core/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.032626 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_64cafc2c-04de-4090-9026-2b986fcae86a/cinder-api-log/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.074849 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c8951ed-3fad-45f7-ab94-b1843d1c4114/cinder-scheduler/1.log" Nov 21 
15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.098196 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_64cafc2c-04de-4090-9026-2b986fcae86a/cinder-api/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.210023 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c8951ed-3fad-45f7-ab94-b1843d1c4114/cinder-scheduler/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.305777 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c8951ed-3fad-45f7-ab94-b1843d1c4114/probe/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.339672 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tn87w_7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.590181 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm_e542a2fc-0fd2-49fa-873e-1d580edd93d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.593495 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hsxmx_2d80929c-c14e-4ec5-943f-de21d45af551/init/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.896392 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9_a3366490-72da-4662-a609-d3fd320bac49/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.899472 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hsxmx_2d80929c-c14e-4ec5-943f-de21d45af551/init/0.log" Nov 21 15:09:47 crc kubenswrapper[4675]: I1121 15:09:47.937359 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hsxmx_2d80929c-c14e-4ec5-943f-de21d45af551/dnsmasq-dns/0.log" Nov 21 15:09:48 crc kubenswrapper[4675]: I1121 15:09:48.175418 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc/glance-log/0.log" Nov 21 15:09:48 crc kubenswrapper[4675]: I1121 15:09:48.211729 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc/glance-httpd/0.log" Nov 21 15:09:48 crc kubenswrapper[4675]: I1121 15:09:48.331940 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e08a8ae1-1033-4b31-89df-b85614075cbf/glance-httpd/0.log" Nov 21 15:09:48 crc kubenswrapper[4675]: I1121 15:09:48.448930 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e08a8ae1-1033-4b31-89df-b85614075cbf/glance-log/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.117221 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-77f4868784-6nk2h_148b5a1d-39fe-4a33-88ee-97b3383595ff/heat-api/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.130290 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7f8454c7d4-x6h7x_a48b13f7-e6d5-448e-b83e-be3b66c31fb0/heat-engine/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.217724 4675 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l69j6_cfe1a316-0dad-402c-b056-2302e5fe219a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.476353 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5f97697d96-mp958_5bdb7df7-dd8e-4fea-9634-65fa6f741de8/heat-cfnapi/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.533357 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8tsm2_ec8162e7-cc12-48eb-982d-036b866eaeb0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.771463 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395561-scxxf_6a16764c-944a-48be-ba08-7b46b89ffdba/keystone-cron/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.844327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395621-7g6lb_06506868-9f56-4ca3-870a-bf6062173504/keystone-cron/0.log" Nov 21 15:09:49 crc kubenswrapper[4675]: I1121 15:09:49.984046 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a/kube-state-metrics/0.log" Nov 21 15:09:50 crc kubenswrapper[4675]: I1121 15:09:50.002213 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7ff9b4b9fd-shbm9_f494051d-de96-4044-a28d-3b05672b5a66/keystone-api/0.log" Nov 21 15:09:50 crc kubenswrapper[4675]: I1121 15:09:50.158716 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh_31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:50 crc kubenswrapper[4675]: I1121 15:09:50.236213 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-mkqjd_88b1961a-032d-40c5-83f7-602511b7808e/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:50 crc kubenswrapper[4675]: I1121 15:09:50.514029 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_151c5400-b481-4494-aacd-020595cc112c/mysqld-exporter/0.log" Nov 21 15:09:50 crc kubenswrapper[4675]: I1121 15:09:50.774167 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dbf4b8f9c-tnln2_7d084a12-d301-4ea1-b049-ca6211a8929d/neutron-httpd/0.log" Nov 21 15:09:50 crc kubenswrapper[4675]: I1121 15:09:50.864017 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dbf4b8f9c-tnln2_7d084a12-d301-4ea1-b049-ca6211a8929d/neutron-api/0.log" Nov 21 15:09:51 crc kubenswrapper[4675]: I1121 15:09:51.309920 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz_50a8108e-2cd1-42e7-9efe-5c2478adb797/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:51 crc kubenswrapper[4675]: I1121 15:09:51.722350 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_18c40348-4d27-4b4c-9b8a-eac9b8b7252a/nova-cell0-conductor-conductor/0.log" Nov 21 15:09:51 crc kubenswrapper[4675]: I1121 15:09:51.865855 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5bfa0c26-ff80-4079-aef8-6cc1a62ba554/nova-api-log/0.log" Nov 21 15:09:52 crc 
kubenswrapper[4675]: I1121 15:09:52.047889 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a2816c5b-51b2-4542-b0ff-cdc5bb61c948/nova-cell1-conductor-conductor/0.log" Nov 21 15:09:52 crc kubenswrapper[4675]: I1121 15:09:52.282054 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5bfa0c26-ff80-4079-aef8-6cc1a62ba554/nova-api-api/0.log" Nov 21 15:09:52 crc kubenswrapper[4675]: I1121 15:09:52.663540 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_25ba2d9f-d85c-403d-b7e6-8b17f48e4316/nova-cell1-novncproxy-novncproxy/0.log" Nov 21 15:09:52 crc kubenswrapper[4675]: I1121 15:09:52.837688 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vlb78_2205f0b5-339c-4165-84fd-9c9f117d757f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:09:52 crc kubenswrapper[4675]: I1121 15:09:52.848914 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" Nov 21 15:09:52 crc kubenswrapper[4675]: E1121 15:09:52.849220 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.100903 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8ae83905-939b-4ae5-bab9-993356ce17b8/memcached/0.log" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.304807 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f2e69762-ea6c-4d7a-a407-8373c1c7b734/nova-metadata-log/0.log" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.435610 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a675b127-f342-4527-b0f1-9e668fcf5ede/nova-scheduler-scheduler/0.log" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.552303 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c9adb63e-74d2-48f6-b639-4b22def78e35/mysql-bootstrap/0.log" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.839111 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c9adb63e-74d2-48f6-b639-4b22def78e35/mysql-bootstrap/0.log" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.865103 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c9adb63e-74d2-48f6-b639-4b22def78e35/galera/0.log" Nov 21 15:09:53 crc kubenswrapper[4675]: I1121 15:09:53.877239 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_05bf6265-2f8a-4d78-9f5a-05304816937d/mysql-bootstrap/0.log" Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.113283 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_05bf6265-2f8a-4d78-9f5a-05304816937d/galera/0.log" Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.126046 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_05bf6265-2f8a-4d78-9f5a-05304816937d/mysql-bootstrap/0.log" Nov 21 15:09:54 
crc kubenswrapper[4675]: I1121 15:09:54.188656 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b6ec2a5-ea89-459f-b66c-4822e68f1498/openstackclient/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.400212 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-l5r9b_4a15b97a-aa41-4d4d-8f75-0b3d2193eded/ovn-controller/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.432861 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xc9np_c1c50a5b-1bd7-4c2a-9424-770e8170212e/openstack-network-exporter/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.687864 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovsdb-server-init/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.739499 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f2e69762-ea6c-4d7a-a407-8373c1c7b734/nova-metadata-metadata/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.833493 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovsdb-server-init/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.866221 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovs-vswitchd/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.885824 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovsdb-server/0.log"
Nov 21 15:09:54 crc kubenswrapper[4675]: I1121 15:09:54.978526 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rvbj7_cd5e1c55-691e-40cc-9e53-b905864402fb/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.033231 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e5d93705-ae99-48ab-99e3-1e225f06ab6e/openstack-network-exporter/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.079117 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e5d93705-ae99-48ab-99e3-1e225f06ab6e/ovn-northd/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.172826 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b1a22076-aa43-4fe3-83ad-1a3e22d3abc7/openstack-network-exporter/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.272606 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b1a22076-aa43-4fe3-83ad-1a3e22d3abc7/ovsdbserver-nb/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.309970 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b5c900d8-26df-4201-9693-318f45bb93d8/openstack-network-exporter/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.384395 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b5c900d8-26df-4201-9693-318f45bb93d8/ovsdbserver-sb/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.569698 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64487ff74-sfh5j_2ff36b45-ea40-44fd-84fe-dc732a5af439/placement-api/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.603338 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/init-config-reloader/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.634786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64487ff74-sfh5j_2ff36b45-ea40-44fd-84fe-dc732a5af439/placement-log/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.901538 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/init-config-reloader/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.913507 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/thanos-sidecar/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.916842 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/prometheus/0.log"
Nov 21 15:09:55 crc kubenswrapper[4675]: I1121 15:09:55.929682 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/config-reloader/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.092549 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6b2ab3dd-83aa-4d37-8f44-bb3d277932fb/setup-container/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.242591 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6b2ab3dd-83aa-4d37-8f44-bb3d277932fb/setup-container/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.254739 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6b2ab3dd-83aa-4d37-8f44-bb3d277932fb/rabbitmq/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.300488 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5ef674f-8b42-40b1-ba1a-fa2d68858b31/setup-container/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.665209 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5ef674f-8b42-40b1-ba1a-fa2d68858b31/setup-container/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.685104 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22_ea3b33b3-6f01-403c-87ed-3c0727db2a97/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.696117 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5ef674f-8b42-40b1-ba1a-fa2d68858b31/rabbitmq/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.885359 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2xt7s_021eb0fa-a9a9-4af1-bc66-8b868fa3c41c/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.887814 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp_de094806-84d9-4903-be6c-c00e33b1e782/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:56 crc kubenswrapper[4675]: I1121 15:09:56.927174 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rblk7_d47864c9-0269-47a5-b718-bce3541df7c5/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.108133 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-dpdgb_7c34957b-d4df-4448-9396-9e7244dc85b5/ssh-known-hosts-edpm-deployment/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.199462 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cb8d89bd7-jsjdv_35b58484-6cb2-4edc-bea9-4d3a8d6b1479/proxy-server/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.316343 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6s7pj_e01d9dde-a9f3-4efc-8997-bf3914cffde9/swift-ring-rebalance/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.326357 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cb8d89bd7-jsjdv_35b58484-6cb2-4edc-bea9-4d3a8d6b1479/proxy-httpd/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.441665 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-auditor/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.514679 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-reaper/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.552674 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-replicator/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.631469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-server/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.662992 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-auditor/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.758304 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-replicator/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.815875 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-updater/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.858628 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-server/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.897714 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-expirer/0.log"
Nov 21 15:09:57 crc kubenswrapper[4675]: I1121 15:09:57.912274 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-auditor/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.017291 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-replicator/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.058905 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-server/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.097020 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-updater/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.132120 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/swift-recon-cron/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.136755 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/rsync/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.345371 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pqp25_6068452e-fbc5-44c6-8141-d3b8b3de6f92/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.437810 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg_087ded3f-0cd4-4471-b0b8-f23a7de03a26/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.625863 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94/test-operator-logs-container/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.663984 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-t67kg_045679ad-48d6-48ed-a9a5-8699cc283733/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:09:58 crc kubenswrapper[4675]: I1121 15:09:58.875871 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_71faa523-7927-4fc1-bb12-0f787758620a/tempest-tests-tempest-tests-runner/0.log"
Nov 21 15:10:06 crc kubenswrapper[4675]: I1121 15:10:06.849189 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:10:06 crc kubenswrapper[4675]: E1121 15:10:06.850094 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:10:20 crc kubenswrapper[4675]: I1121 15:10:20.849643 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:10:20 crc kubenswrapper[4675]: E1121 15:10:20.850618 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.061684 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/util/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.223575 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/pull/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.223697 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/pull/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.250711 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/util/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.466338 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/pull/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.543308 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/extract/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.555559 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/util/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.663238 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-k4lls_f3631bac-6fa8-4ad8-bbad-df880af19292/kube-rbac-proxy/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.772749 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-k4lls_f3631bac-6fa8-4ad8-bbad-df880af19292/manager/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.782548 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-h9twj_7e3588ab-94d7-482f-97c4-67d573181e2c/kube-rbac-proxy/0.log"
Nov 21 15:10:25 crc kubenswrapper[4675]: I1121 15:10:25.965959 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-h9twj_7e3588ab-94d7-482f-97c4-67d573181e2c/manager/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.009471 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-dd9xn_45c2d5a9-a319-4012-91de-77769b6ad913/manager/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.012902 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-dd9xn_45c2d5a9-a319-4012-91de-77769b6ad913/kube-rbac-proxy/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.248449 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-726dt_26fa3df8-f4d3-44d1-8e9b-c20dca446570/kube-rbac-proxy/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.356447 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-726dt_26fa3df8-f4d3-44d1-8e9b-c20dca446570/manager/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.570453 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-4gf9h_2808d52f-0a70-48df-9b55-052faa81f93c/kube-rbac-proxy/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.738156 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-hld5f_a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca/kube-rbac-proxy/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.814662 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-4gf9h_2808d52f-0a70-48df-9b55-052faa81f93c/manager/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.819229 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-hld5f_a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca/manager/0.log"
Nov 21 15:10:26 crc kubenswrapper[4675]: I1121 15:10:26.958095 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-qb7zq_43da63e0-75a0-4e90-9e81-3b3be38a45b1/kube-rbac-proxy/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.126249 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kxfcm_2edf1cc1-0bd0-4329-969f-c2890b507972/kube-rbac-proxy/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.214969 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kxfcm_2edf1cc1-0bd0-4329-969f-c2890b507972/manager/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.221718 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-qb7zq_43da63e0-75a0-4e90-9e81-3b3be38a45b1/manager/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.365977 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-lmpfg_89aec3aa-b2d8-4702-b0fd-005c6d51c669/kube-rbac-proxy/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.439912 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-lf6g7_cc7542b4-b4d9-46e5-8819-784ec50c9c11/kube-rbac-proxy/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.488890 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-lmpfg_89aec3aa-b2d8-4702-b0fd-005c6d51c669/manager/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.596157 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-lf6g7_cc7542b4-b4d9-46e5-8819-784ec50c9c11/manager/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.678059 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-p4r2x_eb3d3afa-eaa5-4271-8a33-45a009a9742a/kube-rbac-proxy/0.log"
Nov 21 15:10:27 crc kubenswrapper[4675]: I1121 15:10:27.736692 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-p4r2x_eb3d3afa-eaa5-4271-8a33-45a009a9742a/manager/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.014790 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-46782_27a202b7-1cf0-4dda-a010-6d59fbe881ed/kube-rbac-proxy/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.054747 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-46782_27a202b7-1cf0-4dda-a010-6d59fbe881ed/manager/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.122278 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-nmtt7_693e699a-cdc4-4282-9ba6-6947c3e42726/kube-rbac-proxy/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.194787 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-nmtt7_693e699a-cdc4-4282-9ba6-6947c3e42726/manager/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.300421 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bhcrz_e8768c70-accf-460e-a781-b5d9eff26f2e/kube-rbac-proxy/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.325348 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bhcrz_e8768c70-accf-460e-a781-b5d9eff26f2e/manager/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.442567 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-cwz25_77644f3e-1a90-4f49-a43c-b3d5b23c8184/kube-rbac-proxy/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.445893 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-cwz25_77644f3e-1a90-4f49-a43c-b3d5b23c8184/manager/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.769559 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bc9ddc77b-27rxv_867fb4c9-f1c0-49da-9a71-3372347fe4f2/operator/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.785892 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4lmmn_d570523a-f2e0-4913-a405-ac5b8582b059/registry-server/0.log"
Nov 21 15:10:28 crc kubenswrapper[4675]: I1121 15:10:28.916283 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p2fwv_0aaf2b35-164b-400a-ad78-84961c2a599c/kube-rbac-proxy/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.091731 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p2fwv_0aaf2b35-164b-400a-ad78-84961c2a599c/manager/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.185400 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dcswk_5661f60c-1801-419e-abaa-7f5e0825f148/kube-rbac-proxy/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.201570 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dcswk_5661f60c-1801-419e-abaa-7f5e0825f148/manager/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.424473 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-65k5k_fec54436-1bcf-4e1e-af27-d86372b07bbe/operator/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.434545 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rpmxk_e5d16474-89e2-4e35-8339-24afbb962e4b/kube-rbac-proxy/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.592432 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rpmxk_e5d16474-89e2-4e35-8339-24afbb962e4b/manager/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.668351 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-8swxd_cfb97bdd-e357-475c-ab5c-184e50acb0dc/kube-rbac-proxy/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.867237 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-2l49b_dd5c12ec-ee26-458d-85f3-2b6bd7c021f1/kube-rbac-proxy/0.log"
Nov 21 15:10:29 crc kubenswrapper[4675]: I1121 15:10:29.963014 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-2l49b_dd5c12ec-ee26-458d-85f3-2b6bd7c021f1/manager/0.log"
Nov 21 15:10:30 crc kubenswrapper[4675]: I1121 15:10:30.037672 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-8swxd_cfb97bdd-e357-475c-ab5c-184e50acb0dc/manager/0.log"
Nov 21 15:10:30 crc kubenswrapper[4675]: I1121 15:10:30.124462 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79fb5496bb-5zhcc_376edcff-4439-418a-80e3-6f6309cdb8f0/manager/0.log"
Nov 21 15:10:30 crc kubenswrapper[4675]: I1121 15:10:30.146233 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-55qzb_fc4f9f2a-5093-4df0-919f-037e57993a93/kube-rbac-proxy/0.log"
Nov 21 15:10:30 crc kubenswrapper[4675]: I1121 15:10:30.184428 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-55qzb_fc4f9f2a-5093-4df0-919f-037e57993a93/manager/0.log"
Nov 21 15:10:32 crc kubenswrapper[4675]: I1121 15:10:32.849118 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:10:32 crc kubenswrapper[4675]: E1121 15:10:32.851573 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:10:45 crc kubenswrapper[4675]: I1121 15:10:45.849156 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:10:45 crc kubenswrapper[4675]: E1121 15:10:45.850171 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:10:48 crc kubenswrapper[4675]: I1121 15:10:48.101372 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7wq8v_90a5318c-96de-40ae-a8f4-87241ab72f28/control-plane-machine-set-operator/0.log"
Nov 21 15:10:48 crc kubenswrapper[4675]: I1121 15:10:48.321670 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwz5c_eb5a0d6b-3347-4d29-90a5-f554c65e5ddb/kube-rbac-proxy/0.log"
Nov 21 15:10:48 crc kubenswrapper[4675]: I1121 15:10:48.344184 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwz5c_eb5a0d6b-3347-4d29-90a5-f554c65e5ddb/machine-api-operator/0.log"
Nov 21 15:10:58 crc kubenswrapper[4675]: I1121 15:10:58.849893 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:10:58 crc kubenswrapper[4675]: E1121 15:10:58.852239 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:10:59 crc kubenswrapper[4675]: I1121 15:10:59.904375 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-2c6xq_5ffe60a3-3d75-49c3-9340-0680d558e18b/cert-manager-controller/0.log"
Nov 21 15:11:00 crc kubenswrapper[4675]: I1121 15:11:00.075785 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-bqb55_054749e0-ba55-43d1-a8d0-3cca3a0b15cf/cert-manager-cainjector/0.log"
Nov 21 15:11:00 crc kubenswrapper[4675]: I1121 15:11:00.106393 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-r47qh_0ba7a123-c240-4c8d-bd82-974e63a888cf/cert-manager-webhook/0.log"
Nov 21 15:11:11 crc kubenswrapper[4675]: I1121 15:11:11.837234 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-crd2x_4ad50a02-1502-4a0b-8f49-32988242ec6b/nmstate-console-plugin/0.log"
Nov 21 15:11:12 crc kubenswrapper[4675]: I1121 15:11:12.041197 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qvplj_de0ab26e-1ac3-48eb-9647-f55c0249b9ec/nmstate-handler/0.log"
Nov 21 15:11:12 crc kubenswrapper[4675]: I1121 15:11:12.087825 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4m8jq_32581066-5208-499f-8473-d7002fd31dca/nmstate-metrics/0.log"
Nov 21 15:11:12 crc kubenswrapper[4675]: I1121 15:11:12.108408 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4m8jq_32581066-5208-499f-8473-d7002fd31dca/kube-rbac-proxy/0.log"
Nov 21 15:11:12 crc kubenswrapper[4675]: I1121 15:11:12.294682 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-2hlgb_83e5bca9-014a-43f4-8b6b-f4a4052ed662/nmstate-operator/0.log"
Nov 21 15:11:12 crc kubenswrapper[4675]: I1121 15:11:12.309190 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-9ztsm_f304a655-1aaa-43a3-81c1-32e5214c02cf/nmstate-webhook/0.log"
Nov 21 15:11:12 crc kubenswrapper[4675]: I1121 15:11:12.849307 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:11:12 crc kubenswrapper[4675]: E1121 15:11:12.849924 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:11:24 crc kubenswrapper[4675]: I1121 15:11:24.247881 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/kube-rbac-proxy/0.log"
Nov 21 15:11:24 crc kubenswrapper[4675]: I1121 15:11:24.396313 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/manager/0.log"
Nov 21 15:11:27 crc kubenswrapper[4675]: I1121 15:11:27.849765 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:11:27 crc kubenswrapper[4675]: E1121 15:11:27.850732 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:11:33 crc kubenswrapper[4675]: I1121 15:11:33.360743 4675 scope.go:117] "RemoveContainer" containerID="b370d08adfe3f1cc82f52a4012b93dd591a6815da811f7669eb186c8aea90fc5"
Nov 21 15:11:33 crc kubenswrapper[4675]: I1121 15:11:33.405925 4675 scope.go:117] "RemoveContainer" containerID="37781fd59d1f2b1eea998ec839c17742db164f62a5aa7498f91f221eed0b99f9"
Nov 21 15:11:33 crc kubenswrapper[4675]: I1121 15:11:33.447993 4675 scope.go:117] "RemoveContainer" containerID="9ffab81ac5c7095ada512fb4f8479201389d10d520d5ee10fe479f5a8edfcd8f"
Nov 21 15:11:38 crc kubenswrapper[4675]: I1121 15:11:38.676839 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-5blzb_91cca7cf-0a78-48e9-80ca-7d7c7e93d0da/cluster-logging-operator/0.log"
Nov 21 15:11:38 crc kubenswrapper[4675]: I1121 15:11:38.865777 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-4nf2w_bbab1657-ebec-4d72-92b4-765a9fb4bd21/collector/0.log"
Nov 21 15:11:38 crc kubenswrapper[4675]: I1121 15:11:38.940458 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_69149637-8974-41a7-b494-3db4c647e9de/loki-compactor/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.176117 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-mpc5k_d8711ff7-1164-4f51-9748-d563536a90d3/gateway/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.182129 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-czh7n_15313a35-1860-458c-9520-8eb44937ad1d/loki-distributor/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.269931 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-mpc5k_d8711ff7-1164-4f51-9748-d563536a90d3/opa/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.359504 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-pflft_3c26c6d8-717f-4d7d-9a42-bdb65213fe5c/gateway/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.384630 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-pflft_3c26c6d8-717f-4d7d-9a42-bdb65213fe5c/opa/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.570707 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_99ed9558-944c-4917-9daf-657bc7f2cbf1/loki-index-gateway/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.679898 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f7b8a2bc-e416-4521-9fa2-44dd6bd69400/loki-ingester/0.log"
Nov 21 15:11:39 crc kubenswrapper[4675]: I1121 15:11:39.834343 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-tdn52_fcc0cd18-60e3-4d70-8504-a0987a0cea4e/loki-querier/0.log"
Nov 21 15:11:40 crc kubenswrapper[4675]: I1121 15:11:40.102208 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-65gdq_c804a918-f222-49f2-87b7-14b0dae0d37f/loki-query-frontend/0.log"
Nov 21 15:11:41 crc kubenswrapper[4675]: I1121 15:11:41.850451 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:11:41 crc kubenswrapper[4675]: E1121 15:11:41.851411 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:11:52 crc kubenswrapper[4675]: I1121 15:11:52.849183 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:11:52 crc kubenswrapper[4675]: E1121 15:11:52.849898 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:11:54 crc kubenswrapper[4675]: I1121 15:11:54.746832 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-krt5f_b076ee09-1376-4f8e-a15f-0b42e2b163d2/kube-rbac-proxy/0.log"
Nov 21 15:11:54 crc kubenswrapper[4675]: I1121 15:11:54.941909 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-krt5f_b076ee09-1376-4f8e-a15f-0b42e2b163d2/controller/0.log"
Nov 21 15:11:54 crc kubenswrapper[4675]: I1121 15:11:54.994034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.177266 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.220302 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.247163 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.298162 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.477866 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.500939 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.534010 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.547227 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.719155 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.753668 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.801618 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/controller/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.808882 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log"
Nov 21 15:11:55 crc kubenswrapper[4675]: I1121 15:11:55.967836 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/frr-metrics/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.057340 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/kube-rbac-proxy/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.091973 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/kube-rbac-proxy-frr/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.202753 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/reloader/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.318919 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-p7bcc_046b1803-3201-4c23-bb9d-2cca261bdda0/frr-k8s-webhook-server/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.593102 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6c7d7f4-57m9x_c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd/manager/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.685279 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6bbc7fcc74-d58sq_21f16da1-dc0f-421b-b6f2-13c658268ae7/webhook-server/0.log"
Nov 21 15:11:56 crc kubenswrapper[4675]: I1121 15:11:56.875513 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-plc79_22f88730-5c3f-4c5d-a223-be8170e96588/kube-rbac-proxy/0.log"
Nov 21 15:11:57 crc kubenswrapper[4675]: I1121 15:11:57.485159 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-plc79_22f88730-5c3f-4c5d-a223-be8170e96588/speaker/0.log"
Nov 21 15:11:57 crc kubenswrapper[4675]: I1121 15:11:57.930892 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/frr/0.log"
Nov 21 15:12:06 crc kubenswrapper[4675]: I1121 15:12:06.850439 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:12:06 crc kubenswrapper[4675]: E1121 15:12:06.851422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.412252 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/util/0.log"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.632467 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/pull/0.log"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.646743 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/util/0.log"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.649233 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/pull/0.log"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.844060 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/pull/0.log"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.857257 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/extract/0.log"
Nov 21 15:12:09 crc kubenswrapper[4675]: I1121 15:12:09.882727 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/util/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.008273 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/util/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.332817 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/util/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.340531 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/pull/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.355187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/pull/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.555804 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/util/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.561941 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/pull/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.576931 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/extract/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.744250 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/util/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.890571 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/pull/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.914601 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/util/0.log"
Nov 21 15:12:10 crc kubenswrapper[4675]: I1121 15:12:10.916274 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/pull/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.086702 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/util/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.119300 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/pull/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.151728 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/extract/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.267627 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/util/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.404521 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/pull/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.415110 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/util/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.449894 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/pull/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.633261 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/pull/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.633809 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/util/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.638383 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/extract/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.806414 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-utilities/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.980340 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-utilities/0.log"
Nov 21 15:12:11 crc kubenswrapper[4675]: I1121 15:12:11.985478 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-content/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.020441 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-content/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.210342 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-content/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.225995 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-utilities/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.416259 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-utilities/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.658313 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-utilities/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.685407 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-content/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.728038 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-content/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.893286 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-content/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.955425 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-utilities/0.log"
Nov 21 15:12:12 crc kubenswrapper[4675]: I1121 15:12:12.958617 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/registry-server/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.153286 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/util/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.385780 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/pull/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.413927 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/pull/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.414204 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/util/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.529729 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/registry-server/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.592019 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/extract/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.616297 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/pull/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.637739 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/util/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.724909 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6q9sj_6a21daba-95a0-4f20-91b5-de4dc44aa0b1/marketplace-operator/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.770926 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-utilities/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.973742 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-content/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.978316 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-utilities/0.log"
Nov 21 15:12:13 crc kubenswrapper[4675]: I1121 15:12:13.980609 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-content/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.138268 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-content/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.161469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-utilities/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.223602 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-utilities/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.370095 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/registry-server/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.412418 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-utilities/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.444408 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-content/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.493560 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-content/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.633673 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-content/0.log"
Nov 21 15:12:14 crc kubenswrapper[4675]: I1121 15:12:14.645550 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-utilities/0.log"
Nov 21 15:12:15 crc kubenswrapper[4675]: I1121 15:12:15.428813 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/registry-server/0.log"
Nov 21 15:12:19 crc kubenswrapper[4675]: I1121 15:12:19.848799 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:12:19 crc kubenswrapper[4675]: E1121 15:12:19.849779 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:12:26 crc kubenswrapper[4675]: I1121 15:12:26.242300 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-mx844_ce64e510-eca3-48f5-858d-165c3d3cfba7/prometheus-operator/0.log"
Nov 21 15:12:26 crc kubenswrapper[4675]: I1121 15:12:26.433809 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_320effcd-ec3e-4741-b3d9-e0ec17502e50/prometheus-operator-admission-webhook/0.log"
Nov 21 15:12:26 crc kubenswrapper[4675]: I1121 15:12:26.482623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_1a50974d-f334-4845-b892-5e4b97fc3d79/prometheus-operator-admission-webhook/0.log"
Nov 21 15:12:26 crc kubenswrapper[4675]: I1121 15:12:26.638813 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-cb77k_57e668d4-e3df-4b36-ad58-51e5b7f2d16e/operator/0.log"
Nov 21 15:12:26 crc kubenswrapper[4675]: I1121 15:12:26.686139 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-mdzgh_1d6f8b49-15cf-404d-8bda-1ae7a7292d2b/observability-ui-dashboards/0.log"
Nov 21 15:12:26 crc kubenswrapper[4675]: I1121 15:12:26.809280 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-67wfr_01c95951-b168-42ed-aab7-9ffe813b6d55/perses-operator/0.log"
Nov 21 15:12:31 crc kubenswrapper[4675]: I1121 15:12:31.852345 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:12:31 crc kubenswrapper[4675]: E1121 15:12:31.854032 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:12:39 crc kubenswrapper[4675]: I1121 15:12:39.434034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/kube-rbac-proxy/0.log"
Nov 21 15:12:39 crc kubenswrapper[4675]: I1121 15:12:39.498226 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/manager/0.log"
Nov 21 15:12:45 crc kubenswrapper[4675]: I1121 15:12:45.849177 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:12:45 crc kubenswrapper[4675]: E1121 15:12:45.849946 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:12:59 crc kubenswrapper[4675]: I1121 15:12:59.849600 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:12:59 crc kubenswrapper[4675]: E1121 15:12:59.850764 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:13:14 crc kubenswrapper[4675]: I1121 15:13:14.858690 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:13:14 crc kubenswrapper[4675]: E1121 15:13:14.859943 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:13:27 crc kubenswrapper[4675]: I1121 15:13:27.849907 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:13:27 crc kubenswrapper[4675]: E1121 15:13:27.850874 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:13:38 crc kubenswrapper[4675]: I1121 15:13:38.850056 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:13:38 crc kubenswrapper[4675]: E1121 15:13:38.851155 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:13:51 crc kubenswrapper[4675]: I1121 15:13:51.849629 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:13:51 crc kubenswrapper[4675]: E1121 15:13:51.850987 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:14:05 crc kubenswrapper[4675]: I1121 15:14:05.848945 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:14:05 crc kubenswrapper[4675]: E1121 15:14:05.849994 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:14:17 crc kubenswrapper[4675]: I1121 15:14:17.848668 4675 scope.go:117] "RemoveContainer" containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752"
Nov 21 15:14:18 crc kubenswrapper[4675]: I1121 15:14:18.715129 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"f4a54f482157d9a82e4690ff9119779cc0b9536c7b965dffc48df0e2a1cda567"}
Nov 21 15:14:34 crc kubenswrapper[4675]: I1121 15:14:34.901530 4675 generic.go:334] "Generic (PLEG): container finished" podID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerID="5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af" exitCode=0
Nov 21 15:14:34 crc kubenswrapper[4675]: I1121 15:14:34.901622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l4gdg/must-gather-g8mkd"
event={"ID":"cc98edbf-7660-4c65-adfb-c44fea8df67b","Type":"ContainerDied","Data":"5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af"} Nov 21 15:14:34 crc kubenswrapper[4675]: I1121 15:14:34.902909 4675 scope.go:117] "RemoveContainer" containerID="5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af" Nov 21 15:14:35 crc kubenswrapper[4675]: I1121 15:14:35.556766 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l4gdg_must-gather-g8mkd_cc98edbf-7660-4c65-adfb-c44fea8df67b/gather/0.log" Nov 21 15:14:44 crc kubenswrapper[4675]: I1121 15:14:44.091774 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l4gdg/must-gather-g8mkd"] Nov 21 15:14:44 crc kubenswrapper[4675]: I1121 15:14:44.092547 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="copy" containerID="cri-o://ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5" gracePeriod=2 Nov 21 15:14:44 crc kubenswrapper[4675]: I1121 15:14:44.108787 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l4gdg/must-gather-g8mkd"] Nov 21 15:14:44 crc kubenswrapper[4675]: I1121 15:14:44.855010 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l4gdg_must-gather-g8mkd_cc98edbf-7660-4c65-adfb-c44fea8df67b/copy/0.log" Nov 21 15:14:44 crc kubenswrapper[4675]: I1121 15:14:44.855986 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.003762 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc98edbf-7660-4c65-adfb-c44fea8df67b-must-gather-output\") pod \"cc98edbf-7660-4c65-adfb-c44fea8df67b\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.003849 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgvlw\" (UniqueName: \"kubernetes.io/projected/cc98edbf-7660-4c65-adfb-c44fea8df67b-kube-api-access-dgvlw\") pod \"cc98edbf-7660-4c65-adfb-c44fea8df67b\" (UID: \"cc98edbf-7660-4c65-adfb-c44fea8df67b\") " Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.006813 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l4gdg_must-gather-g8mkd_cc98edbf-7660-4c65-adfb-c44fea8df67b/copy/0.log" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.007244 4675 generic.go:334] "Generic (PLEG): container finished" podID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerID="ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5" exitCode=143 Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.007294 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l4gdg/must-gather-g8mkd" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.007325 4675 scope.go:117] "RemoveContainer" containerID="ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.016867 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc98edbf-7660-4c65-adfb-c44fea8df67b-kube-api-access-dgvlw" (OuterVolumeSpecName: "kube-api-access-dgvlw") pod "cc98edbf-7660-4c65-adfb-c44fea8df67b" (UID: "cc98edbf-7660-4c65-adfb-c44fea8df67b"). InnerVolumeSpecName "kube-api-access-dgvlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.073715 4675 scope.go:117] "RemoveContainer" containerID="5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.106971 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgvlw\" (UniqueName: \"kubernetes.io/projected/cc98edbf-7660-4c65-adfb-c44fea8df67b-kube-api-access-dgvlw\") on node \"crc\" DevicePath \"\"" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.185748 4675 scope.go:117] "RemoveContainer" containerID="ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5" Nov 21 15:14:45 crc kubenswrapper[4675]: E1121 15:14:45.190521 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5\": container with ID starting with ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5 not found: ID does not exist" containerID="ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.190599 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5"} err="failed to get container status \"ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5\": rpc error: code = NotFound desc = could not find container \"ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5\": container with ID starting with ceae17209a173ca64360ebdefa2c69869f9c2543dffa06572b693b41b9752fa5 not found: ID does not exist" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.190635 4675 scope.go:117] "RemoveContainer" containerID="5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af" Nov 21 15:14:45 crc kubenswrapper[4675]: E1121 15:14:45.191303 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af\": container with ID starting with 5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af not found: ID does not exist" containerID="5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.191341 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af"} err="failed to get container status \"5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af\": rpc error: code = NotFound desc = could not find container \"5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af\": container with ID starting 
with 5336adc09858118991b6d9f9645bfa15ddfd9822de517a57110b6a79e4a494af not found: ID does not exist" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.203876 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc98edbf-7660-4c65-adfb-c44fea8df67b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cc98edbf-7660-4c65-adfb-c44fea8df67b" (UID: "cc98edbf-7660-4c65-adfb-c44fea8df67b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:14:45 crc kubenswrapper[4675]: I1121 15:14:45.211955 4675 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cc98edbf-7660-4c65-adfb-c44fea8df67b-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 21 15:14:46 crc kubenswrapper[4675]: I1121 15:14:46.862104 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" path="/var/lib/kubelet/pods/cc98edbf-7660-4c65-adfb-c44fea8df67b/volumes" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.345441 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd"] Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346635 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab92e0e-080b-4554-8608-a26ff174b646" containerName="container-00" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346656 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab92e0e-080b-4554-8608-a26ff174b646" containerName="container-00" Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346686 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346696 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346707 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346715 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346737 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="extract-content" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346744 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="extract-content" Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346786 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="gather" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346795 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="gather" Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346809 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="copy" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346815 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="copy" Nov 21 15:15:00 crc kubenswrapper[4675]: E1121 15:15:00.346835 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="extract-utilities" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.346841 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="extract-utilities" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.347082 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.347099 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab92e0e-080b-4554-8608-a26ff174b646" containerName="container-00" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.347120 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="copy" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.347131 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc98edbf-7660-4c65-adfb-c44fea8df67b" containerName="gather" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.349295 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.351668 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.359472 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd"] Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.366963 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.460099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1f48ca6-f535-4541-af51-ebfb3942a8ea-config-volume\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.460157 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znw85\" (UniqueName: \"kubernetes.io/projected/b1f48ca6-f535-4541-af51-ebfb3942a8ea-kube-api-access-znw85\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.460614 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1f48ca6-f535-4541-af51-ebfb3942a8ea-secret-volume\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.562981 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1f48ca6-f535-4541-af51-ebfb3942a8ea-config-volume\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.563298 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znw85\" (UniqueName: \"kubernetes.io/projected/b1f48ca6-f535-4541-af51-ebfb3942a8ea-kube-api-access-znw85\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.563454 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1f48ca6-f535-4541-af51-ebfb3942a8ea-secret-volume\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.563991 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1f48ca6-f535-4541-af51-ebfb3942a8ea-config-volume\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.570230 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1f48ca6-f535-4541-af51-ebfb3942a8ea-secret-volume\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.580394 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znw85\" (UniqueName: \"kubernetes.io/projected/b1f48ca6-f535-4541-af51-ebfb3942a8ea-kube-api-access-znw85\") pod \"collect-profiles-29395635-hcvsd\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:00 crc kubenswrapper[4675]: I1121 15:15:00.672930 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:01 crc kubenswrapper[4675]: I1121 15:15:01.255801 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd"] Nov 21 15:15:02 crc kubenswrapper[4675]: I1121 15:15:02.179405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" event={"ID":"b1f48ca6-f535-4541-af51-ebfb3942a8ea","Type":"ContainerStarted","Data":"b44391abfd40c30fcebee57371b7c16ab7bdc9cb90c040819179f89297c7f22b"} Nov 21 15:15:02 crc kubenswrapper[4675]: I1121 15:15:02.179759 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" event={"ID":"b1f48ca6-f535-4541-af51-ebfb3942a8ea","Type":"ContainerStarted","Data":"87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce"} Nov 21 15:15:02 crc kubenswrapper[4675]: I1121 15:15:02.203586 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" podStartSLOduration=2.203564249 podStartE2EDuration="2.203564249s" podCreationTimestamp="2025-11-21 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:15:02.194606885 +0000 UTC m=+6178.921021632" watchObservedRunningTime="2025-11-21 15:15:02.203564249 +0000 UTC m=+6178.929978976" Nov 21 15:15:03 crc kubenswrapper[4675]: I1121 15:15:03.191683 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1f48ca6-f535-4541-af51-ebfb3942a8ea" containerID="b44391abfd40c30fcebee57371b7c16ab7bdc9cb90c040819179f89297c7f22b" exitCode=0 Nov 21 15:15:03 crc kubenswrapper[4675]: I1121 15:15:03.191729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" event={"ID":"b1f48ca6-f535-4541-af51-ebfb3942a8ea","Type":"ContainerDied","Data":"b44391abfd40c30fcebee57371b7c16ab7bdc9cb90c040819179f89297c7f22b"} Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.652090 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.770706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1f48ca6-f535-4541-af51-ebfb3942a8ea-secret-volume\") pod \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.770764 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1f48ca6-f535-4541-af51-ebfb3942a8ea-config-volume\") pod \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.770796 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znw85\" (UniqueName: \"kubernetes.io/projected/b1f48ca6-f535-4541-af51-ebfb3942a8ea-kube-api-access-znw85\") pod \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\" (UID: \"b1f48ca6-f535-4541-af51-ebfb3942a8ea\") " Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.771578 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f48ca6-f535-4541-af51-ebfb3942a8ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "b1f48ca6-f535-4541-af51-ebfb3942a8ea" (UID: "b1f48ca6-f535-4541-af51-ebfb3942a8ea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.777536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f48ca6-f535-4541-af51-ebfb3942a8ea-kube-api-access-znw85" (OuterVolumeSpecName: "kube-api-access-znw85") pod "b1f48ca6-f535-4541-af51-ebfb3942a8ea" (UID: "b1f48ca6-f535-4541-af51-ebfb3942a8ea"). InnerVolumeSpecName "kube-api-access-znw85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.780268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f48ca6-f535-4541-af51-ebfb3942a8ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1f48ca6-f535-4541-af51-ebfb3942a8ea" (UID: "b1f48ca6-f535-4541-af51-ebfb3942a8ea"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.873814 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1f48ca6-f535-4541-af51-ebfb3942a8ea-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.874076 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1f48ca6-f535-4541-af51-ebfb3942a8ea-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:04 crc kubenswrapper[4675]: I1121 15:15:04.874087 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znw85\" (UniqueName: \"kubernetes.io/projected/b1f48ca6-f535-4541-af51-ebfb3942a8ea-kube-api-access-znw85\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:05 crc kubenswrapper[4675]: I1121 15:15:05.215339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" event={"ID":"b1f48ca6-f535-4541-af51-ebfb3942a8ea","Type":"ContainerDied","Data":"87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce"} Nov 21 15:15:05 crc kubenswrapper[4675]: I1121 15:15:05.215376 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce" Nov 21 15:15:05 crc kubenswrapper[4675]: I1121 15:15:05.215452 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-hcvsd" Nov 21 15:15:05 crc kubenswrapper[4675]: I1121 15:15:05.280243 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"] Nov 21 15:15:05 crc kubenswrapper[4675]: I1121 15:15:05.291648 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-prxr4"] Nov 21 15:15:06 crc kubenswrapper[4675]: I1121 15:15:06.862345 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032d0414-27e3-499b-9259-a2ef3a97083d" path="/var/lib/kubelet/pods/032d0414-27e3-499b-9259-a2ef3a97083d/volumes" Nov 21 15:15:08 crc kubenswrapper[4675]: E1121 15:15:08.800170 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.455767 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nkkqv"] Nov 21 15:15:11 crc kubenswrapper[4675]: E1121 15:15:11.457436 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f48ca6-f535-4541-af51-ebfb3942a8ea" containerName="collect-profiles" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.457455 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f48ca6-f535-4541-af51-ebfb3942a8ea" containerName="collect-profiles" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.457765 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b1f48ca6-f535-4541-af51-ebfb3942a8ea" containerName="collect-profiles" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.457793 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60cf358-02ab-4432-b283-ccb3366f9bea" containerName="registry-server" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.512580 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.528754 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkkqv"] Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.631389 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-catalog-content\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.632089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfrj\" (UniqueName: \"kubernetes.io/projected/0fc855a8-0e49-4b22-ac88-f91c50706d58-kube-api-access-2pfrj\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.632269 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-utilities\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.734546 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfrj\" (UniqueName: \"kubernetes.io/projected/0fc855a8-0e49-4b22-ac88-f91c50706d58-kube-api-access-2pfrj\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.734647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-utilities\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.734721 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-catalog-content\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.735304 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-catalog-content\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.735314 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-utilities\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.745838 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5zs5"] Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.748152 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.772930 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pfrj\" (UniqueName: \"kubernetes.io/projected/0fc855a8-0e49-4b22-ac88-f91c50706d58-kube-api-access-2pfrj\") pod \"certified-operators-nkkqv\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.826245 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5zs5"] Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.835808 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.837945 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-utilities\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.838032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-catalog-content\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.838081 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nclx\" (UniqueName: \"kubernetes.io/projected/fb0079f3-703d-46b4-aa87-260214373f87-kube-api-access-9nclx\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.954438 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-catalog-content\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.954516 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nclx\" (UniqueName: \"kubernetes.io/projected/fb0079f3-703d-46b4-aa87-260214373f87-kube-api-access-9nclx\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 
15:15:11.954796 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-utilities\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.955640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-catalog-content\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:11 crc kubenswrapper[4675]: I1121 15:15:11.955658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-utilities\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:12 crc kubenswrapper[4675]: I1121 15:15:12.003822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nclx\" (UniqueName: \"kubernetes.io/projected/fb0079f3-703d-46b4-aa87-260214373f87-kube-api-access-9nclx\") pod \"community-operators-z5zs5\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:12 crc kubenswrapper[4675]: I1121 15:15:12.138329 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:12 crc kubenswrapper[4675]: I1121 15:15:12.487981 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkkqv"] Nov 21 15:15:12 crc kubenswrapper[4675]: W1121 15:15:12.496553 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc855a8_0e49_4b22_ac88_f91c50706d58.slice/crio-3d26a994ecc662f69340306027a9d991e08f22f70e64fd37a4b9de5a8ec3ec3f WatchSource:0}: Error finding container 3d26a994ecc662f69340306027a9d991e08f22f70e64fd37a4b9de5a8ec3ec3f: Status 404 returned error can't find the container with id 3d26a994ecc662f69340306027a9d991e08f22f70e64fd37a4b9de5a8ec3ec3f Nov 21 15:15:12 crc kubenswrapper[4675]: I1121 15:15:12.682866 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5zs5"] Nov 21 15:15:12 crc kubenswrapper[4675]: W1121 15:15:12.686384 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb0079f3_703d_46b4_aa87_260214373f87.slice/crio-979029d99f52ccd3c53550ed86a054b740c1a86916516ce1b5f2e601ca70cb09 WatchSource:0}: Error finding container 979029d99f52ccd3c53550ed86a054b740c1a86916516ce1b5f2e601ca70cb09: Status 404 returned error can't find the container with id 979029d99f52ccd3c53550ed86a054b740c1a86916516ce1b5f2e601ca70cb09 Nov 21 15:15:13 crc kubenswrapper[4675]: E1121 15:15:13.074681 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.321654 4675 generic.go:334] "Generic (PLEG): container finished" podID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerID="df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b" exitCode=0 Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.321778 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerDied","Data":"df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b"} Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.321822 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerStarted","Data":"3d26a994ecc662f69340306027a9d991e08f22f70e64fd37a4b9de5a8ec3ec3f"} Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.323538 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb0079f3-703d-46b4-aa87-260214373f87" containerID="e52bd2ecd0b23c6b24c3062212f27aa0a3ececad0da1f3eeeeeffd8c0355325c" exitCode=0 Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.323582 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerDied","Data":"e52bd2ecd0b23c6b24c3062212f27aa0a3ececad0da1f3eeeeeffd8c0355325c"} Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.323609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerStarted","Data":"979029d99f52ccd3c53550ed86a054b740c1a86916516ce1b5f2e601ca70cb09"} Nov 21 15:15:13 crc kubenswrapper[4675]: I1121 15:15:13.326671 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.137707 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4gx9g"] Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.141310 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.162811 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gx9g"] Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.239783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88s5\" (UniqueName: \"kubernetes.io/projected/6991c602-7b13-4940-b217-32cccef58f2e-kube-api-access-x88s5\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.239971 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-catalog-content\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.240130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-utilities\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.342743 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-utilities\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.342907 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88s5\" (UniqueName: \"kubernetes.io/projected/6991c602-7b13-4940-b217-32cccef58f2e-kube-api-access-x88s5\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.343043 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-catalog-content\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.343835 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-catalog-content\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.344124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-utilities\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.353755 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerStarted","Data":"0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5"} Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.355988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerStarted","Data":"573ab94e74a0e2da9d97a20afcf9c9f13ed37a7c4f221d4ec8e55c21d6526299"} Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.370933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88s5\" (UniqueName: \"kubernetes.io/projected/6991c602-7b13-4940-b217-32cccef58f2e-kube-api-access-x88s5\") pod \"redhat-marketplace-4gx9g\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.460717 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:15 crc kubenswrapper[4675]: I1121 15:15:15.994828 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gx9g"] Nov 21 15:15:15 crc kubenswrapper[4675]: W1121 15:15:15.998087 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6991c602_7b13_4940_b217_32cccef58f2e.slice/crio-816166aad301b23337364561e60a8873d70de652e2a19b4484f3a9e8922a16ea WatchSource:0}: Error finding container 816166aad301b23337364561e60a8873d70de652e2a19b4484f3a9e8922a16ea: Status 404 returned error can't find the container with id 816166aad301b23337364561e60a8873d70de652e2a19b4484f3a9e8922a16ea Nov 21 15:15:16 crc kubenswrapper[4675]: I1121 15:15:16.368670 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerStarted","Data":"ef75d8d3c89261cd1bdd261537113c2ddd736e92376130a535c0a0ee9b68635b"} Nov 21 15:15:16 crc kubenswrapper[4675]: I1121 15:15:16.368885 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerStarted","Data":"816166aad301b23337364561e60a8873d70de652e2a19b4484f3a9e8922a16ea"} Nov 21 15:15:17 crc kubenswrapper[4675]: I1121 15:15:17.381245 4675 generic.go:334] "Generic (PLEG): container finished" podID="6991c602-7b13-4940-b217-32cccef58f2e" containerID="ef75d8d3c89261cd1bdd261537113c2ddd736e92376130a535c0a0ee9b68635b" exitCode=0 Nov 21 15:15:17 crc kubenswrapper[4675]: I1121 15:15:17.381496 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerDied","Data":"ef75d8d3c89261cd1bdd261537113c2ddd736e92376130a535c0a0ee9b68635b"} Nov 21 15:15:19 crc kubenswrapper[4675]: E1121 15:15:19.168907 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc855a8_0e49_4b22_ac88_f91c50706d58.slice/crio-0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:19 crc kubenswrapper[4675]: I1121 15:15:19.404697 4675 generic.go:334] "Generic (PLEG): container finished" podID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerID="0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5" exitCode=0 Nov 21 15:15:19 crc kubenswrapper[4675]: I1121 15:15:19.404780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerDied","Data":"0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5"} Nov 21 15:15:19 crc kubenswrapper[4675]: I1121 15:15:19.408093 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerStarted","Data":"8d563ce688b724662dc764a1e2e233d062459541651b00f74185b1d70f2d0ec4"} Nov 21 15:15:21 crc kubenswrapper[4675]: I1121 15:15:21.437520 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb0079f3-703d-46b4-aa87-260214373f87" containerID="573ab94e74a0e2da9d97a20afcf9c9f13ed37a7c4f221d4ec8e55c21d6526299" exitCode=0 Nov 21 15:15:21 crc kubenswrapper[4675]: I1121 15:15:21.437565 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerDied","Data":"573ab94e74a0e2da9d97a20afcf9c9f13ed37a7c4f221d4ec8e55c21d6526299"} Nov 21 15:15:24 crc kubenswrapper[4675]: I1121 15:15:24.480263 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerStarted","Data":"1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a"} Nov 21 15:15:24 crc kubenswrapper[4675]: I1121 15:15:24.502964 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nkkqv" podStartSLOduration=3.184212245 podStartE2EDuration="13.502946856s" podCreationTimestamp="2025-11-21 15:15:11 +0000 UTC" firstStartedPulling="2025-11-21 15:15:13.324691289 +0000 UTC m=+6190.051106016" lastFinishedPulling="2025-11-21 15:15:23.6434259 +0000 UTC m=+6200.369840627" observedRunningTime="2025-11-21 15:15:24.497158811 +0000 UTC m=+6201.223573538" watchObservedRunningTime="2025-11-21 15:15:24.502946856 +0000 UTC m=+6201.229361583" Nov 21 15:15:25 crc kubenswrapper[4675]: I1121 15:15:25.493139 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerStarted","Data":"e85ab359d6bab03ca6b4d1788662d21226112f10055d0d3d069185394dcbd730"} Nov 21 15:15:25 crc kubenswrapper[4675]: I1121 15:15:25.520104 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5zs5" podStartSLOduration=2.870507411 podStartE2EDuration="14.520081268s" podCreationTimestamp="2025-11-21 15:15:11 +0000 UTC" firstStartedPulling="2025-11-21 15:15:13.325142781 +0000 UTC m=+6190.051557508" 
lastFinishedPulling="2025-11-21 15:15:24.974716628 +0000 UTC m=+6201.701131365" observedRunningTime="2025-11-21 15:15:25.51298571 +0000 UTC m=+6202.239400437" watchObservedRunningTime="2025-11-21 15:15:25.520081268 +0000 UTC m=+6202.246495995" Nov 21 15:15:26 crc kubenswrapper[4675]: I1121 15:15:26.506305 4675 generic.go:334] "Generic (PLEG): container finished" podID="6991c602-7b13-4940-b217-32cccef58f2e" containerID="8d563ce688b724662dc764a1e2e233d062459541651b00f74185b1d70f2d0ec4" exitCode=0 Nov 21 15:15:26 crc kubenswrapper[4675]: I1121 15:15:26.506380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerDied","Data":"8d563ce688b724662dc764a1e2e233d062459541651b00f74185b1d70f2d0ec4"} Nov 21 15:15:27 crc kubenswrapper[4675]: I1121 15:15:27.519208 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerStarted","Data":"e27b431fb30729c8e39d2f55a50444fa28bea1c9d392eb4e27c6a6eabf828b79"} Nov 21 15:15:27 crc kubenswrapper[4675]: I1121 15:15:27.538943 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4gx9g" podStartSLOduration=2.9248303140000003 podStartE2EDuration="12.538924056s" podCreationTimestamp="2025-11-21 15:15:15 +0000 UTC" firstStartedPulling="2025-11-21 15:15:17.383416302 +0000 UTC m=+6194.109831019" lastFinishedPulling="2025-11-21 15:15:26.997510034 +0000 UTC m=+6203.723924761" observedRunningTime="2025-11-21 15:15:27.535116591 +0000 UTC m=+6204.261531318" watchObservedRunningTime="2025-11-21 15:15:27.538924056 +0000 UTC m=+6204.265338783" Nov 21 15:15:28 crc kubenswrapper[4675]: E1121 15:15:28.343417 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:29 crc kubenswrapper[4675]: E1121 15:15:29.219730 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:31 crc kubenswrapper[4675]: I1121 15:15:31.836666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:31 crc kubenswrapper[4675]: I1121 15:15:31.837643 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:32 crc kubenswrapper[4675]: I1121 15:15:32.139144 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:32 crc kubenswrapper[4675]: I1121 15:15:32.140598 
4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:32 crc kubenswrapper[4675]: I1121 15:15:32.892185 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nkkqv" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:32 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:32 crc kubenswrapper[4675]: > Nov 21 15:15:33 crc kubenswrapper[4675]: I1121 15:15:33.194982 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z5zs5" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:33 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:33 crc kubenswrapper[4675]: > Nov 21 15:15:33 crc kubenswrapper[4675]: I1121 15:15:33.605275 4675 scope.go:117] "RemoveContainer" containerID="eb193d6f56e3a3971a63b93bb58af8a8f872699e0a009a79036e4f14232eb3fe" Nov 21 15:15:33 crc kubenswrapper[4675]: I1121 15:15:33.655349 4675 scope.go:117] "RemoveContainer" containerID="7188a339f118555cea44ffb7783754d8713f93cc4dff705e6d6d4c8b0b841bdb" Nov 21 15:15:35 crc kubenswrapper[4675]: I1121 15:15:35.460912 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:35 crc kubenswrapper[4675]: I1121 15:15:35.461513 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:36 crc kubenswrapper[4675]: I1121 15:15:36.514979 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4gx9g" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:36 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:36 crc kubenswrapper[4675]: > Nov 21 15:15:39 crc kubenswrapper[4675]: E1121 15:15:39.533483 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:42 crc kubenswrapper[4675]: I1121 15:15:42.196038 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:42 crc kubenswrapper[4675]: I1121 15:15:42.248735 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:42 crc kubenswrapper[4675]: I1121 15:15:42.889634 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nkkqv" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:42 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:42 crc kubenswrapper[4675]: > Nov 21 15:15:43 crc kubenswrapper[4675]: E1121 15:15:43.064873 
4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:45 crc kubenswrapper[4675]: I1121 15:15:45.509666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:45 crc kubenswrapper[4675]: I1121 15:15:45.571538 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:46 crc kubenswrapper[4675]: I1121 15:15:46.413251 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5zs5"] Nov 21 15:15:46 crc kubenswrapper[4675]: I1121 15:15:46.413853 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z5zs5" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="registry-server" containerID="cri-o://e85ab359d6bab03ca6b4d1788662d21226112f10055d0d3d069185394dcbd730" gracePeriod=2 Nov 21 15:15:46 crc kubenswrapper[4675]: I1121 15:15:46.734987 4675 generic.go:334] "Generic (PLEG): container finished" podID="fb0079f3-703d-46b4-aa87-260214373f87" containerID="e85ab359d6bab03ca6b4d1788662d21226112f10055d0d3d069185394dcbd730" exitCode=0 Nov 21 15:15:46 crc kubenswrapper[4675]: I1121 15:15:46.735368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerDied","Data":"e85ab359d6bab03ca6b4d1788662d21226112f10055d0d3d069185394dcbd730"} Nov 21 15:15:46 crc kubenswrapper[4675]: I1121 15:15:46.955665 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.058494 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-utilities\") pod \"fb0079f3-703d-46b4-aa87-260214373f87\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.058597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nclx\" (UniqueName: \"kubernetes.io/projected/fb0079f3-703d-46b4-aa87-260214373f87-kube-api-access-9nclx\") pod \"fb0079f3-703d-46b4-aa87-260214373f87\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.058783 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-catalog-content\") pod \"fb0079f3-703d-46b4-aa87-260214373f87\" (UID: \"fb0079f3-703d-46b4-aa87-260214373f87\") " Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.059438 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-utilities" (OuterVolumeSpecName: "utilities") pod "fb0079f3-703d-46b4-aa87-260214373f87" (UID: "fb0079f3-703d-46b4-aa87-260214373f87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.064652 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0079f3-703d-46b4-aa87-260214373f87-kube-api-access-9nclx" (OuterVolumeSpecName: "kube-api-access-9nclx") pod "fb0079f3-703d-46b4-aa87-260214373f87" (UID: "fb0079f3-703d-46b4-aa87-260214373f87"). InnerVolumeSpecName "kube-api-access-9nclx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.152396 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb0079f3-703d-46b4-aa87-260214373f87" (UID: "fb0079f3-703d-46b4-aa87-260214373f87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.161554 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.161599 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb0079f3-703d-46b4-aa87-260214373f87-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.161615 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nclx\" (UniqueName: \"kubernetes.io/projected/fb0079f3-703d-46b4-aa87-260214373f87-kube-api-access-9nclx\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.763169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5zs5" event={"ID":"fb0079f3-703d-46b4-aa87-260214373f87","Type":"ContainerDied","Data":"979029d99f52ccd3c53550ed86a054b740c1a86916516ce1b5f2e601ca70cb09"} Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.763231 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5zs5" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.764229 4675 scope.go:117] "RemoveContainer" containerID="e85ab359d6bab03ca6b4d1788662d21226112f10055d0d3d069185394dcbd730" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.804857 4675 scope.go:117] "RemoveContainer" containerID="573ab94e74a0e2da9d97a20afcf9c9f13ed37a7c4f221d4ec8e55c21d6526299" Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.811724 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5zs5"] Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.823204 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z5zs5"] Nov 21 15:15:47 crc kubenswrapper[4675]: I1121 15:15:47.840247 4675 scope.go:117] "RemoveContainer" containerID="e52bd2ecd0b23c6b24c3062212f27aa0a3ececad0da1f3eeeeeffd8c0355325c" Nov 21 15:15:48 crc kubenswrapper[4675]: E1121 15:15:48.266437 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:48 crc kubenswrapper[4675]: E1121 15:15:48.266776 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:48 crc kubenswrapper[4675]: I1121 15:15:48.866252 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fb0079f3-703d-46b4-aa87-260214373f87" path="/var/lib/kubelet/pods/fb0079f3-703d-46b4-aa87-260214373f87/volumes" Nov 21 15:15:49 crc kubenswrapper[4675]: E1121 15:15:49.576905 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:50 crc kubenswrapper[4675]: I1121 15:15:50.015152 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gx9g"] Nov 21 15:15:50 crc kubenswrapper[4675]: I1121 15:15:50.015728 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4gx9g" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="registry-server" containerID="cri-o://e27b431fb30729c8e39d2f55a50444fa28bea1c9d392eb4e27c6a6eabf828b79" gracePeriod=2 Nov 21 15:15:50 crc kubenswrapper[4675]: I1121 15:15:50.807578 4675 generic.go:334] "Generic (PLEG): container finished" podID="6991c602-7b13-4940-b217-32cccef58f2e" containerID="e27b431fb30729c8e39d2f55a50444fa28bea1c9d392eb4e27c6a6eabf828b79" exitCode=0 Nov 21 15:15:50 crc kubenswrapper[4675]: I1121 15:15:50.807635 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerDied","Data":"e27b431fb30729c8e39d2f55a50444fa28bea1c9d392eb4e27c6a6eabf828b79"} Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.180938 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.357932 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-utilities\") pod \"6991c602-7b13-4940-b217-32cccef58f2e\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.358200 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x88s5\" (UniqueName: \"kubernetes.io/projected/6991c602-7b13-4940-b217-32cccef58f2e-kube-api-access-x88s5\") pod \"6991c602-7b13-4940-b217-32cccef58f2e\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.358358 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-catalog-content\") pod \"6991c602-7b13-4940-b217-32cccef58f2e\" (UID: \"6991c602-7b13-4940-b217-32cccef58f2e\") " Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.359146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-utilities" (OuterVolumeSpecName: "utilities") pod "6991c602-7b13-4940-b217-32cccef58f2e" (UID: "6991c602-7b13-4940-b217-32cccef58f2e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.359477 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.364864 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6991c602-7b13-4940-b217-32cccef58f2e-kube-api-access-x88s5" (OuterVolumeSpecName: "kube-api-access-x88s5") pod "6991c602-7b13-4940-b217-32cccef58f2e" (UID: "6991c602-7b13-4940-b217-32cccef58f2e"). InnerVolumeSpecName "kube-api-access-x88s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.383856 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6991c602-7b13-4940-b217-32cccef58f2e" (UID: "6991c602-7b13-4940-b217-32cccef58f2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.461944 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x88s5\" (UniqueName: \"kubernetes.io/projected/6991c602-7b13-4940-b217-32cccef58f2e-kube-api-access-x88s5\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.461985 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6991c602-7b13-4940-b217-32cccef58f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.821005 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gx9g" event={"ID":"6991c602-7b13-4940-b217-32cccef58f2e","Type":"ContainerDied","Data":"816166aad301b23337364561e60a8873d70de652e2a19b4484f3a9e8922a16ea"} Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.821336 4675 scope.go:117] "RemoveContainer" containerID="e27b431fb30729c8e39d2f55a50444fa28bea1c9d392eb4e27c6a6eabf828b79" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.821100 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gx9g" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.862374 4675 scope.go:117] "RemoveContainer" containerID="8d563ce688b724662dc764a1e2e233d062459541651b00f74185b1d70f2d0ec4" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.872100 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gx9g"] Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.886363 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gx9g"] Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.894632 4675 scope.go:117] "RemoveContainer" containerID="ef75d8d3c89261cd1bdd261537113c2ddd736e92376130a535c0a0ee9b68635b" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.900528 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:51 crc kubenswrapper[4675]: I1121 15:15:51.955249 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:52 crc kubenswrapper[4675]: I1121 15:15:52.862744 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6991c602-7b13-4940-b217-32cccef58f2e" path="/var/lib/kubelet/pods/6991c602-7b13-4940-b217-32cccef58f2e/volumes" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.215695 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkkqv"] Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.217734 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nkkqv" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="registry-server" containerID="cri-o://1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a" gracePeriod=2 Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.723996 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.866775 4675 generic.go:334] "Generic (PLEG): container finished" podID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerID="1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a" exitCode=0 Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.866842 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nkkqv" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.866838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerDied","Data":"1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a"} Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.867259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkkqv" event={"ID":"0fc855a8-0e49-4b22-ac88-f91c50706d58","Type":"ContainerDied","Data":"3d26a994ecc662f69340306027a9d991e08f22f70e64fd37a4b9de5a8ec3ec3f"} Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.867289 4675 scope.go:117] "RemoveContainer" containerID="1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.867934 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfrj\" (UniqueName: \"kubernetes.io/projected/0fc855a8-0e49-4b22-ac88-f91c50706d58-kube-api-access-2pfrj\") pod \"0fc855a8-0e49-4b22-ac88-f91c50706d58\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.868082 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-utilities\") pod \"0fc855a8-0e49-4b22-ac88-f91c50706d58\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.868414 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-catalog-content\") pod \"0fc855a8-0e49-4b22-ac88-f91c50706d58\" (UID: \"0fc855a8-0e49-4b22-ac88-f91c50706d58\") " Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.869149 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-utilities" (OuterVolumeSpecName: "utilities") pod "0fc855a8-0e49-4b22-ac88-f91c50706d58" (UID: "0fc855a8-0e49-4b22-ac88-f91c50706d58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.875772 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc855a8-0e49-4b22-ac88-f91c50706d58-kube-api-access-2pfrj" (OuterVolumeSpecName: "kube-api-access-2pfrj") pod "0fc855a8-0e49-4b22-ac88-f91c50706d58" (UID: "0fc855a8-0e49-4b22-ac88-f91c50706d58"). InnerVolumeSpecName "kube-api-access-2pfrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.889522 4675 scope.go:117] "RemoveContainer" containerID="0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.916850 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fc855a8-0e49-4b22-ac88-f91c50706d58" (UID: "0fc855a8-0e49-4b22-ac88-f91c50706d58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.953894 4675 scope.go:117] "RemoveContainer" containerID="df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.970995 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.971243 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc855a8-0e49-4b22-ac88-f91c50706d58-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:55 crc kubenswrapper[4675]: I1121 15:15:55.971264 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfrj\" (UniqueName: \"kubernetes.io/projected/0fc855a8-0e49-4b22-ac88-f91c50706d58-kube-api-access-2pfrj\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.006282 4675 scope.go:117] "RemoveContainer" containerID="1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a" Nov 21 15:15:56 crc kubenswrapper[4675]: E1121 15:15:56.006700 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a\": container with ID starting with 1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a not found: ID does not exist" containerID="1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a" Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.006814 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a"} err="failed to get container status \"1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a\": rpc error: code = NotFound desc = could not find container \"1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a\": container with ID starting with 1f771fceefa65031c9774239a1cde0e2eae13524fc2a67d50dfbe0453b6d437a not found: ID does not exist" Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.006909 4675 scope.go:117] "RemoveContainer" containerID="0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5" Nov 21 15:15:56 crc kubenswrapper[4675]: E1121 15:15:56.007393 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5\": container with ID starting with 0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5 not found: ID does not exist" containerID="0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5" Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.007433 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5"} err="failed to get container status \"0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5\": rpc error: code = NotFound desc = could not find container \"0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5\": container with ID starting with 0a30b294c5b48c8bde2e9d4e96d62b2fe9f0fbe4d6881e76935b5f5e1310b8e5 not found: ID does not exist" Nov 21 15:15:56 crc 
kubenswrapper[4675]: I1121 15:15:56.007457 4675 scope.go:117] "RemoveContainer" containerID="df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b" Nov 21 15:15:56 crc kubenswrapper[4675]: E1121 15:15:56.007672 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b\": container with ID starting with df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b not found: ID does not exist" containerID="df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b" Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.007705 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b"} err="failed to get container status \"df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b\": rpc error: code = NotFound desc = could not find container \"df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b\": container with ID starting with df7781a383c6fdf82576098bd557cabe670c8ae08751cdb2c4c493f7f516be5b not found: ID does not exist" Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.208452 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkkqv"] Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.219418 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nkkqv"] Nov 21 15:15:56 crc kubenswrapper[4675]: I1121 15:15:56.868289 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" path="/var/lib/kubelet/pods/0fc855a8-0e49-4b22-ac88-f91c50706d58/volumes" Nov 21 15:15:58 crc kubenswrapper[4675]: E1121 15:15:58.316497 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:59 crc kubenswrapper[4675]: E1121 15:15:59.619209 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice/crio-87bd05d9fa5b9eb1dba9386a8c0d578cb6f4b7d1d9502a9cbb90d37ce8ad53ce\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f48ca6_f535_4541_af51_ebfb3942a8ea.slice\": RecentStats: unable to find data in memory cache]" Nov 21 15:16:46 crc kubenswrapper[4675]: I1121 15:16:46.136944 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:16:46 crc kubenswrapper[4675]: I1121 15:16:46.137643 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.364308 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zj4b8"] Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365513 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="extract-content" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365532 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="extract-content" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365555 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365565 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365587 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="extract-content" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365595 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="extract-content" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365621 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365629 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365642 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365650 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365679 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="extract-utilities" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365688 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="extract-utilities" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365704 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="extract-utilities" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365712 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="extract-utilities" Nov 21 15:16:54 crc kubenswrapper[4675]: E1121 15:16:54.365746 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="extract-content" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365754 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="extract-content" Nov 21 15:16:54 crc kubenswrapper[4675]: 
E1121 15:16:54.365765 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="extract-utilities" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.365773 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="extract-utilities" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.375470 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6991c602-7b13-4940-b217-32cccef58f2e" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.375525 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0079f3-703d-46b4-aa87-260214373f87" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.375535 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc855a8-0e49-4b22-ac88-f91c50706d58" containerName="registry-server" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.377658 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zj4b8"] Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.377834 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.466835 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-catalog-content\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.467147 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6ss\" (UniqueName: \"kubernetes.io/projected/edcfca86-40ab-40c7-9556-408ba56fb5af-kube-api-access-8h6ss\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.467287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-utilities\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.570326 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6ss\" (UniqueName: \"kubernetes.io/projected/edcfca86-40ab-40c7-9556-408ba56fb5af-kube-api-access-8h6ss\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.570425 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-utilities\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.570589 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-catalog-content\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.571278 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-catalog-content\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.571460 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-utilities\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.593674 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6ss\" (UniqueName: \"kubernetes.io/projected/edcfca86-40ab-40c7-9556-408ba56fb5af-kube-api-access-8h6ss\") pod \"redhat-operators-zj4b8\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:54 crc kubenswrapper[4675]: I1121 15:16:54.716385 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:16:55 crc kubenswrapper[4675]: I1121 15:16:55.248486 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zj4b8"] Nov 21 15:16:55 crc kubenswrapper[4675]: I1121 15:16:55.541517 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerStarted","Data":"5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3"} Nov 21 15:16:55 crc kubenswrapper[4675]: I1121 15:16:55.541837 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerStarted","Data":"bc5455697cce481551422852106c3c01ca81e520d402bbaf0a22aa903ae165db"} Nov 21 15:16:56 crc kubenswrapper[4675]: I1121 15:16:56.556285 4675 generic.go:334] "Generic (PLEG): container finished" podID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerID="5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3" exitCode=0 Nov 21 15:16:56 crc kubenswrapper[4675]: I1121 15:16:56.556434 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerDied","Data":"5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3"} Nov 21 15:16:59 crc kubenswrapper[4675]: I1121 15:16:59.594039 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerStarted","Data":"547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949"} Nov 21 15:17:07 crc kubenswrapper[4675]: I1121 15:17:07.688577 4675 generic.go:334] "Generic (PLEG): container finished" podID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerID="547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949" exitCode=0 Nov 21 15:17:07 crc 
kubenswrapper[4675]: I1121 15:17:07.689156 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerDied","Data":"547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949"} Nov 21 15:17:08 crc kubenswrapper[4675]: I1121 15:17:08.704875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerStarted","Data":"3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6"} Nov 21 15:17:08 crc kubenswrapper[4675]: I1121 15:17:08.727735 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zj4b8" podStartSLOduration=3.027542093 podStartE2EDuration="14.727716484s" podCreationTimestamp="2025-11-21 15:16:54 +0000 UTC" firstStartedPulling="2025-11-21 15:16:56.559631438 +0000 UTC m=+6293.286046205" lastFinishedPulling="2025-11-21 15:17:08.259805859 +0000 UTC m=+6304.986220596" observedRunningTime="2025-11-21 15:17:08.720672949 +0000 UTC m=+6305.447087676" watchObservedRunningTime="2025-11-21 15:17:08.727716484 +0000 UTC m=+6305.454131211" Nov 21 15:17:14 crc kubenswrapper[4675]: I1121 15:17:14.717538 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:17:14 crc kubenswrapper[4675]: I1121 15:17:14.718786 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:17:15 crc kubenswrapper[4675]: I1121 15:17:15.767511 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zj4b8" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" probeResult="failure" output=< Nov 21 15:17:15 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:17:15 crc kubenswrapper[4675]: > Nov 21 15:17:16 crc kubenswrapper[4675]: I1121 15:17:16.136106 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:17:16 crc kubenswrapper[4675]: I1121 15:17:16.136175 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:17:25 crc kubenswrapper[4675]: I1121 15:17:25.771027 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zj4b8" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" probeResult="failure" output=< Nov 21 15:17:25 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:17:25 crc kubenswrapper[4675]: > Nov 21 15:17:35 crc kubenswrapper[4675]: I1121 15:17:35.762123 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zj4b8" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" probeResult="failure" output=< Nov 21 15:17:35 crc kubenswrapper[4675]: timeout: failed to connect 
service ":50051" within 1s Nov 21 15:17:35 crc kubenswrapper[4675]: > Nov 21 15:17:44 crc kubenswrapper[4675]: I1121 15:17:44.767980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:17:44 crc kubenswrapper[4675]: I1121 15:17:44.819142 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:17:45 crc kubenswrapper[4675]: I1121 15:17:45.008884 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zj4b8"] Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.128838 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zj4b8" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" containerID="cri-o://3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6" gracePeriod=2 Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.135879 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.135928 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.135964 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.136793 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4a54f482157d9a82e4690ff9119779cc0b9536c7b965dffc48df0e2a1cda567"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.136850 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://f4a54f482157d9a82e4690ff9119779cc0b9536c7b965dffc48df0e2a1cda567" gracePeriod=600 Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.649332 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.809988 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h6ss\" (UniqueName: \"kubernetes.io/projected/edcfca86-40ab-40c7-9556-408ba56fb5af-kube-api-access-8h6ss\") pod \"edcfca86-40ab-40c7-9556-408ba56fb5af\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.810038 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-utilities\") pod \"edcfca86-40ab-40c7-9556-408ba56fb5af\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.810362 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-catalog-content\") pod \"edcfca86-40ab-40c7-9556-408ba56fb5af\" (UID: \"edcfca86-40ab-40c7-9556-408ba56fb5af\") " Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.811366 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-utilities" (OuterVolumeSpecName: "utilities") pod "edcfca86-40ab-40c7-9556-408ba56fb5af" (UID: "edcfca86-40ab-40c7-9556-408ba56fb5af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.816696 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcfca86-40ab-40c7-9556-408ba56fb5af-kube-api-access-8h6ss" (OuterVolumeSpecName: "kube-api-access-8h6ss") pod "edcfca86-40ab-40c7-9556-408ba56fb5af" (UID: "edcfca86-40ab-40c7-9556-408ba56fb5af"). InnerVolumeSpecName "kube-api-access-8h6ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.914613 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.914710 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h6ss\" (UniqueName: \"kubernetes.io/projected/edcfca86-40ab-40c7-9556-408ba56fb5af-kube-api-access-8h6ss\") on node \"crc\" DevicePath \"\"" Nov 21 15:17:46 crc kubenswrapper[4675]: I1121 15:17:46.918457 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edcfca86-40ab-40c7-9556-408ba56fb5af" (UID: "edcfca86-40ab-40c7-9556-408ba56fb5af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.018648 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edcfca86-40ab-40c7-9556-408ba56fb5af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.142852 4675 generic.go:334] "Generic (PLEG): container finished" podID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerID="3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6" exitCode=0 Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.142923 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj4b8" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.143052 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerDied","Data":"3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6"} Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.143183 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj4b8" event={"ID":"edcfca86-40ab-40c7-9556-408ba56fb5af","Type":"ContainerDied","Data":"bc5455697cce481551422852106c3c01ca81e520d402bbaf0a22aa903ae165db"} Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.143215 4675 scope.go:117] "RemoveContainer" containerID="3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.147942 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="f4a54f482157d9a82e4690ff9119779cc0b9536c7b965dffc48df0e2a1cda567" exitCode=0 Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.147990 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"f4a54f482157d9a82e4690ff9119779cc0b9536c7b965dffc48df0e2a1cda567"} Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.148029 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"} Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.171935 4675 scope.go:117] "RemoveContainer" containerID="547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.194553 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zj4b8"] Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.204172 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zj4b8"] Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.217028 4675 scope.go:117] "RemoveContainer" containerID="5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.237701 4675 scope.go:117] "RemoveContainer" containerID="3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6" Nov 21 15:17:47 crc kubenswrapper[4675]: E1121 15:17:47.238167 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6\": container with ID starting with 3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6 not found: ID does not exist" containerID="3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.238233 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6"} err="failed to get container status \"3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6\": rpc error: code = NotFound desc = could not find container \"3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6\": container with ID starting with 3265d91ea634862475af0ca9529473382c8f9da03247a556404c3db95889e0a6 not found: ID does not exist" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.238269 4675 scope.go:117] "RemoveContainer" containerID="547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949" Nov 21 15:17:47 crc kubenswrapper[4675]: E1121 15:17:47.238636 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949\": container with ID starting with 547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949 not found: ID does not exist" containerID="547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.238673 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949"} err="failed to get container status \"547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949\": rpc error: code = NotFound desc = could not find container \"547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949\": container with ID starting with 547cba5d78f1aa68c81a3d50c3f9e90d965f7dbd751eb3b9e0f1ca27541c0949 not found: ID does not exist" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.238706 4675 scope.go:117] "RemoveContainer" containerID="5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3" Nov 21 15:17:47 crc kubenswrapper[4675]: E1121 15:17:47.239100 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3\": container with ID starting with 5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3 not found: ID does not exist" containerID="5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.239133 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3"} err="failed to get container status \"5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3\": rpc error: code = NotFound desc = could not find container \"5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3\": container with ID starting with 5588f2671f638569804ccab8aa0fef36377634dea3fdefa3f365c94c29239ca3 not found: ID does not exist" Nov 21 15:17:47 crc kubenswrapper[4675]: I1121 15:17:47.239153 4675 scope.go:117] "RemoveContainer" 
containerID="ad37d8b889023e0c7a115f668717c6abc3625284fc913af46cbb13436a12f752" Nov 21 15:17:48 crc kubenswrapper[4675]: I1121 15:17:48.862989 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" path="/var/lib/kubelet/pods/edcfca86-40ab-40c7-9556-408ba56fb5af/volumes" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.441397 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57qhv/must-gather-lr6sm"] Nov 21 15:18:01 crc kubenswrapper[4675]: E1121 15:18:01.442550 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="extract-utilities" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.442568 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="extract-utilities" Nov 21 15:18:01 crc kubenswrapper[4675]: E1121 15:18:01.442606 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.442613 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" Nov 21 15:18:01 crc kubenswrapper[4675]: E1121 15:18:01.442631 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="extract-content" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.442640 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="extract-content" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.442856 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcfca86-40ab-40c7-9556-408ba56fb5af" containerName="registry-server" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.450974 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.461280 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-57qhv"/"default-dockercfg-rbjd6" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.461503 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-57qhv"/"openshift-service-ca.crt" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.462324 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-57qhv"/"kube-root-ca.crt" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.473928 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-57qhv/must-gather-lr6sm"] Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.584866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77j7\" (UniqueName: \"kubernetes.io/projected/e596a80c-ed6e-43ca-9936-964085f6614e-kube-api-access-x77j7\") pod \"must-gather-lr6sm\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.585198 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e596a80c-ed6e-43ca-9936-964085f6614e-must-gather-output\") pod \"must-gather-lr6sm\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.687919 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77j7\" (UniqueName: \"kubernetes.io/projected/e596a80c-ed6e-43ca-9936-964085f6614e-kube-api-access-x77j7\") pod \"must-gather-lr6sm\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.688015 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e596a80c-ed6e-43ca-9936-964085f6614e-must-gather-output\") pod \"must-gather-lr6sm\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.688648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e596a80c-ed6e-43ca-9936-964085f6614e-must-gather-output\") pod \"must-gather-lr6sm\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.725884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77j7\" (UniqueName: \"kubernetes.io/projected/e596a80c-ed6e-43ca-9936-964085f6614e-kube-api-access-x77j7\") pod \"must-gather-lr6sm\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:01 crc kubenswrapper[4675]: I1121 15:18:01.788303 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:18:02 crc kubenswrapper[4675]: I1121 15:18:02.328204 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-57qhv/must-gather-lr6sm"] Nov 21 15:18:03 crc kubenswrapper[4675]: I1121 15:18:03.342431 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/must-gather-lr6sm" event={"ID":"e596a80c-ed6e-43ca-9936-964085f6614e","Type":"ContainerStarted","Data":"c2c065d0709c70e279f1e8d328b448b6028d8c5fa3235d9db02543f6fda1a3c4"} Nov 21 15:18:03 crc kubenswrapper[4675]: I1121 15:18:03.342968 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/must-gather-lr6sm" event={"ID":"e596a80c-ed6e-43ca-9936-964085f6614e","Type":"ContainerStarted","Data":"bda1d5b1756ecbb53908eb8d8cda945d4c8521f3148ac5b83205366631721279"} Nov 21 15:18:03 crc kubenswrapper[4675]: I1121 15:18:03.342979 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/must-gather-lr6sm" event={"ID":"e596a80c-ed6e-43ca-9936-964085f6614e","Type":"ContainerStarted","Data":"1cadd471f73051d69a5fe2bfcdc67f55024089d67c617962a3a35799d602ff02"} Nov 21 15:18:03 crc kubenswrapper[4675]: I1121 15:18:03.366534 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-57qhv/must-gather-lr6sm" podStartSLOduration=2.366507806 podStartE2EDuration="2.366507806s" podCreationTimestamp="2025-11-21 15:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:18:03.358799824 +0000 UTC m=+6360.085214541" watchObservedRunningTime="2025-11-21 15:18:03.366507806 +0000 UTC m=+6360.092922553" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.765685 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57qhv/crc-debug-wwvwt"] Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.768118 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.820456 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjljn\" (UniqueName: \"kubernetes.io/projected/eaf3a251-9f75-4235-9693-15c71a77fe48-kube-api-access-qjljn\") pod \"crc-debug-wwvwt\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.820518 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eaf3a251-9f75-4235-9693-15c71a77fe48-host\") pod \"crc-debug-wwvwt\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.923469 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjljn\" (UniqueName: \"kubernetes.io/projected/eaf3a251-9f75-4235-9693-15c71a77fe48-kube-api-access-qjljn\") pod \"crc-debug-wwvwt\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.923527 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eaf3a251-9f75-4235-9693-15c71a77fe48-host\") pod \"crc-debug-wwvwt\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.923667 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eaf3a251-9f75-4235-9693-15c71a77fe48-host\") pod \"crc-debug-wwvwt\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:06 crc kubenswrapper[4675]: I1121 15:18:06.943475 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjljn\" (UniqueName: \"kubernetes.io/projected/eaf3a251-9f75-4235-9693-15c71a77fe48-kube-api-access-qjljn\") pod \"crc-debug-wwvwt\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:07 crc kubenswrapper[4675]: I1121 15:18:07.094685 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:18:07 crc kubenswrapper[4675]: I1121 15:18:07.393101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" event={"ID":"eaf3a251-9f75-4235-9693-15c71a77fe48","Type":"ContainerStarted","Data":"20341242639d1852841e2b83db257eeaa21896ed44e09420f7f373a06faa0451"} Nov 21 15:18:08 crc kubenswrapper[4675]: I1121 15:18:08.413261 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" event={"ID":"eaf3a251-9f75-4235-9693-15c71a77fe48","Type":"ContainerStarted","Data":"94b0942984e27f88d6ba5619e95d632b1b2c512464dce0d362406ac568a203c1"} Nov 21 15:18:08 crc kubenswrapper[4675]: I1121 15:18:08.446332 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" podStartSLOduration=2.446310099 podStartE2EDuration="2.446310099s" podCreationTimestamp="2025-11-21 15:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:18:08.432218757 +0000 UTC m=+6365.158633484" watchObservedRunningTime="2025-11-21 15:18:08.446310099 +0000 UTC m=+6365.172724836" Nov 21 15:19:06 crc kubenswrapper[4675]: I1121 15:19:06.148174 4675 generic.go:334] "Generic (PLEG): container finished" podID="eaf3a251-9f75-4235-9693-15c71a77fe48" containerID="94b0942984e27f88d6ba5619e95d632b1b2c512464dce0d362406ac568a203c1" exitCode=0 Nov 21 15:19:06 crc kubenswrapper[4675]: I1121 15:19:06.148215 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" event={"ID":"eaf3a251-9f75-4235-9693-15c71a77fe48","Type":"ContainerDied","Data":"94b0942984e27f88d6ba5619e95d632b1b2c512464dce0d362406ac568a203c1"} Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.292247 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.335340 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57qhv/crc-debug-wwvwt"] Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.347509 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57qhv/crc-debug-wwvwt"] Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.378014 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjljn\" (UniqueName: \"kubernetes.io/projected/eaf3a251-9f75-4235-9693-15c71a77fe48-kube-api-access-qjljn\") pod \"eaf3a251-9f75-4235-9693-15c71a77fe48\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.378060 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eaf3a251-9f75-4235-9693-15c71a77fe48-host\") pod \"eaf3a251-9f75-4235-9693-15c71a77fe48\" (UID: \"eaf3a251-9f75-4235-9693-15c71a77fe48\") " Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.378350 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaf3a251-9f75-4235-9693-15c71a77fe48-host" (OuterVolumeSpecName: "host") pod "eaf3a251-9f75-4235-9693-15c71a77fe48" (UID: "eaf3a251-9f75-4235-9693-15c71a77fe48"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.378971 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eaf3a251-9f75-4235-9693-15c71a77fe48-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.383672 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf3a251-9f75-4235-9693-15c71a77fe48-kube-api-access-qjljn" (OuterVolumeSpecName: "kube-api-access-qjljn") pod "eaf3a251-9f75-4235-9693-15c71a77fe48" (UID: "eaf3a251-9f75-4235-9693-15c71a77fe48"). InnerVolumeSpecName "kube-api-access-qjljn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:19:07 crc kubenswrapper[4675]: I1121 15:19:07.481270 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjljn\" (UniqueName: \"kubernetes.io/projected/eaf3a251-9f75-4235-9693-15c71a77fe48-kube-api-access-qjljn\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.174244 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20341242639d1852841e2b83db257eeaa21896ed44e09420f7f373a06faa0451" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.174363 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-wwvwt" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.545063 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57qhv/crc-debug-fwwjp"] Nov 21 15:19:08 crc kubenswrapper[4675]: E1121 15:19:08.545742 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf3a251-9f75-4235-9693-15c71a77fe48" containerName="container-00" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.545760 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf3a251-9f75-4235-9693-15c71a77fe48" containerName="container-00" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.546061 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf3a251-9f75-4235-9693-15c71a77fe48" containerName="container-00" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.547164 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.637966 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrvt\" (UniqueName: \"kubernetes.io/projected/644b9f84-b70b-45d8-8b64-6f55acaaafed-kube-api-access-4wrvt\") pod \"crc-debug-fwwjp\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.638337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/644b9f84-b70b-45d8-8b64-6f55acaaafed-host\") pod \"crc-debug-fwwjp\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.740060 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wrvt\" (UniqueName: \"kubernetes.io/projected/644b9f84-b70b-45d8-8b64-6f55acaaafed-kube-api-access-4wrvt\") pod \"crc-debug-fwwjp\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.740253 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/644b9f84-b70b-45d8-8b64-6f55acaaafed-host\") pod \"crc-debug-fwwjp\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.740427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/644b9f84-b70b-45d8-8b64-6f55acaaafed-host\") pod \"crc-debug-fwwjp\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.761373 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wrvt\" (UniqueName: \"kubernetes.io/projected/644b9f84-b70b-45d8-8b64-6f55acaaafed-kube-api-access-4wrvt\") pod \"crc-debug-fwwjp\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.863196 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf3a251-9f75-4235-9693-15c71a77fe48" path="/var/lib/kubelet/pods/eaf3a251-9f75-4235-9693-15c71a77fe48/volumes" Nov 21 15:19:08 crc kubenswrapper[4675]: I1121 15:19:08.873752 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:09 crc kubenswrapper[4675]: I1121 15:19:09.186987 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" event={"ID":"644b9f84-b70b-45d8-8b64-6f55acaaafed","Type":"ContainerStarted","Data":"dd38028cfec4c47524799347c45054cb2ca82e7b724ec39413ebee33d610b3fc"} Nov 21 15:19:09 crc kubenswrapper[4675]: I1121 15:19:09.187047 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" event={"ID":"644b9f84-b70b-45d8-8b64-6f55acaaafed","Type":"ContainerStarted","Data":"0099b7e671bf928876d9a75b50c7761776716800356622af75b153bfaa7d75ba"} Nov 21 15:19:10 crc kubenswrapper[4675]: I1121 15:19:10.197398 4675 generic.go:334] "Generic (PLEG): container finished" podID="644b9f84-b70b-45d8-8b64-6f55acaaafed" containerID="dd38028cfec4c47524799347c45054cb2ca82e7b724ec39413ebee33d610b3fc" exitCode=0 Nov 21 15:19:10 crc kubenswrapper[4675]: I1121 15:19:10.197552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" event={"ID":"644b9f84-b70b-45d8-8b64-6f55acaaafed","Type":"ContainerDied","Data":"dd38028cfec4c47524799347c45054cb2ca82e7b724ec39413ebee33d610b3fc"} Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.328420 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.399868 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wrvt\" (UniqueName: \"kubernetes.io/projected/644b9f84-b70b-45d8-8b64-6f55acaaafed-kube-api-access-4wrvt\") pod \"644b9f84-b70b-45d8-8b64-6f55acaaafed\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.400446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/644b9f84-b70b-45d8-8b64-6f55acaaafed-host\") pod \"644b9f84-b70b-45d8-8b64-6f55acaaafed\" (UID: \"644b9f84-b70b-45d8-8b64-6f55acaaafed\") " Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.400499 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/644b9f84-b70b-45d8-8b64-6f55acaaafed-host" (OuterVolumeSpecName: "host") pod "644b9f84-b70b-45d8-8b64-6f55acaaafed" (UID: "644b9f84-b70b-45d8-8b64-6f55acaaafed"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.401558 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/644b9f84-b70b-45d8-8b64-6f55acaaafed-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.415408 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644b9f84-b70b-45d8-8b64-6f55acaaafed-kube-api-access-4wrvt" (OuterVolumeSpecName: "kube-api-access-4wrvt") pod "644b9f84-b70b-45d8-8b64-6f55acaaafed" (UID: "644b9f84-b70b-45d8-8b64-6f55acaaafed"). InnerVolumeSpecName "kube-api-access-4wrvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:19:11 crc kubenswrapper[4675]: I1121 15:19:11.503207 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wrvt\" (UniqueName: \"kubernetes.io/projected/644b9f84-b70b-45d8-8b64-6f55acaaafed-kube-api-access-4wrvt\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:12 crc kubenswrapper[4675]: I1121 15:19:12.220511 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" event={"ID":"644b9f84-b70b-45d8-8b64-6f55acaaafed","Type":"ContainerDied","Data":"0099b7e671bf928876d9a75b50c7761776716800356622af75b153bfaa7d75ba"} Nov 21 15:19:12 crc kubenswrapper[4675]: I1121 15:19:12.220585 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0099b7e671bf928876d9a75b50c7761776716800356622af75b153bfaa7d75ba" Nov 21 15:19:12 crc kubenswrapper[4675]: I1121 15:19:12.220674 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-fwwjp" Nov 21 15:19:12 crc kubenswrapper[4675]: I1121 15:19:12.657637 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57qhv/crc-debug-fwwjp"] Nov 21 15:19:12 crc kubenswrapper[4675]: I1121 15:19:12.670194 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57qhv/crc-debug-fwwjp"] Nov 21 15:19:12 crc kubenswrapper[4675]: I1121 15:19:12.863906 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644b9f84-b70b-45d8-8b64-6f55acaaafed" path="/var/lib/kubelet/pods/644b9f84-b70b-45d8-8b64-6f55acaaafed/volumes" Nov 21 15:19:13 crc kubenswrapper[4675]: I1121 15:19:13.854455 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57qhv/crc-debug-gr9w7"] Nov 21 15:19:13 crc kubenswrapper[4675]: E1121 15:19:13.855506 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644b9f84-b70b-45d8-8b64-6f55acaaafed" containerName="container-00" Nov 21 15:19:13 crc kubenswrapper[4675]: I1121 15:19:13.855532 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="644b9f84-b70b-45d8-8b64-6f55acaaafed" containerName="container-00" Nov 21 15:19:13 crc kubenswrapper[4675]: I1121 15:19:13.856005 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="644b9f84-b70b-45d8-8b64-6f55acaaafed" containerName="container-00" Nov 21 15:19:13 crc kubenswrapper[4675]: I1121 15:19:13.857131 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:13 crc kubenswrapper[4675]: I1121 15:19:13.976600 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctvs\" (UniqueName: \"kubernetes.io/projected/4cfee818-d0a1-468e-94d2-237d75fed9f5-kube-api-access-rctvs\") pod \"crc-debug-gr9w7\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:13 crc kubenswrapper[4675]: I1121 15:19:13.976904 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfee818-d0a1-468e-94d2-237d75fed9f5-host\") pod \"crc-debug-gr9w7\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:14 crc kubenswrapper[4675]: I1121 15:19:14.079746 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfee818-d0a1-468e-94d2-237d75fed9f5-host\") pod \"crc-debug-gr9w7\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:14 crc kubenswrapper[4675]: I1121 15:19:14.079852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctvs\" (UniqueName: \"kubernetes.io/projected/4cfee818-d0a1-468e-94d2-237d75fed9f5-kube-api-access-rctvs\") pod \"crc-debug-gr9w7\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:14 crc kubenswrapper[4675]: I1121 15:19:14.079905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfee818-d0a1-468e-94d2-237d75fed9f5-host\") pod \"crc-debug-gr9w7\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:14 crc kubenswrapper[4675]: I1121 15:19:14.097603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctvs\" (UniqueName: \"kubernetes.io/projected/4cfee818-d0a1-468e-94d2-237d75fed9f5-kube-api-access-rctvs\") pod \"crc-debug-gr9w7\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:14 crc kubenswrapper[4675]: I1121 15:19:14.177104 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:14 crc kubenswrapper[4675]: I1121 15:19:14.245281 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-gr9w7" event={"ID":"4cfee818-d0a1-468e-94d2-237d75fed9f5","Type":"ContainerStarted","Data":"4fcebdb8f8d859ca6984fef792dd1177b4e9d24bcfef7b5c8d8c1a463fdd1f80"} Nov 21 15:19:15 crc kubenswrapper[4675]: I1121 15:19:15.256975 4675 generic.go:334] "Generic (PLEG): container finished" podID="4cfee818-d0a1-468e-94d2-237d75fed9f5" containerID="2bcbb7248aca6f8935042c8112f47f01721013791379238108efe6693f389ec9" exitCode=0 Nov 21 15:19:15 crc kubenswrapper[4675]: I1121 15:19:15.257095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/crc-debug-gr9w7" event={"ID":"4cfee818-d0a1-468e-94d2-237d75fed9f5","Type":"ContainerDied","Data":"2bcbb7248aca6f8935042c8112f47f01721013791379238108efe6693f389ec9"} Nov 21 15:19:15 crc kubenswrapper[4675]: I1121 15:19:15.299161 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57qhv/crc-debug-gr9w7"] Nov 21 15:19:15 crc kubenswrapper[4675]: I1121 15:19:15.310907 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57qhv/crc-debug-gr9w7"] Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.425146 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.539818 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rctvs\" (UniqueName: \"kubernetes.io/projected/4cfee818-d0a1-468e-94d2-237d75fed9f5-kube-api-access-rctvs\") pod \"4cfee818-d0a1-468e-94d2-237d75fed9f5\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.540386 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfee818-d0a1-468e-94d2-237d75fed9f5-host\") pod \"4cfee818-d0a1-468e-94d2-237d75fed9f5\" (UID: \"4cfee818-d0a1-468e-94d2-237d75fed9f5\") " Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.540421 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cfee818-d0a1-468e-94d2-237d75fed9f5-host" (OuterVolumeSpecName: "host") pod "4cfee818-d0a1-468e-94d2-237d75fed9f5" (UID: "4cfee818-d0a1-468e-94d2-237d75fed9f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.541229 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cfee818-d0a1-468e-94d2-237d75fed9f5-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.546459 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfee818-d0a1-468e-94d2-237d75fed9f5-kube-api-access-rctvs" (OuterVolumeSpecName: "kube-api-access-rctvs") pod "4cfee818-d0a1-468e-94d2-237d75fed9f5" (UID: "4cfee818-d0a1-468e-94d2-237d75fed9f5"). InnerVolumeSpecName "kube-api-access-rctvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.643333 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rctvs\" (UniqueName: \"kubernetes.io/projected/4cfee818-d0a1-468e-94d2-237d75fed9f5-kube-api-access-rctvs\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:16 crc kubenswrapper[4675]: I1121 15:19:16.865444 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfee818-d0a1-468e-94d2-237d75fed9f5" path="/var/lib/kubelet/pods/4cfee818-d0a1-468e-94d2-237d75fed9f5/volumes" Nov 21 15:19:17 crc kubenswrapper[4675]: I1121 15:19:17.297148 4675 scope.go:117] "RemoveContainer" containerID="2bcbb7248aca6f8935042c8112f47f01721013791379238108efe6693f389ec9" Nov 21 15:19:17 crc kubenswrapper[4675]: I1121 15:19:17.297183 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/crc-debug-gr9w7" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.356188 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-api/0.log" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.504894 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-evaluator/0.log" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.550751 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-listener/0.log" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.595306 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e1986ade-c95f-42c9-9ae4-8518e89cb7b8/aodh-notifier/0.log" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.737154 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85bfdcc858-xc95t_370d8c4c-811f-4e1e-b801-828d8fa5d1c2/barbican-api/0.log" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.739849 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85bfdcc858-xc95t_370d8c4c-811f-4e1e-b801-828d8fa5d1c2/barbican-api-log/0.log" Nov 21 15:19:45 crc kubenswrapper[4675]: I1121 15:19:45.973658 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57c587945d-p7z7g_521461a2-7f1f-43b2-8ff9-be3a054e25f6/barbican-keystone-listener/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.020496 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6998886fc9-xdttj_d4328076-0e3d-40b2-b686-502e7f263a2c/barbican-worker/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.118944 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57c587945d-p7z7g_521461a2-7f1f-43b2-8ff9-be3a054e25f6/barbican-keystone-listener-log/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.136308 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.136361 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.217576 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6998886fc9-xdttj_d4328076-0e3d-40b2-b686-502e7f263a2c/barbican-worker-log/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.262820 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j7clf_fb55d1ca-c721-4bca-9a73-e01fa4da2008/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.464780 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/ceilometer-notification-agent/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.488157 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/proxy-httpd/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.546795 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/sg-core/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.603475 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d34d8b0b-08da-4455-b70a-e4a7a4dff526/ceilometer-central-agent/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.752125 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_64cafc2c-04de-4090-9026-2b986fcae86a/cinder-api/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.777719 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_64cafc2c-04de-4090-9026-2b986fcae86a/cinder-api-log/0.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.954892 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c8951ed-3fad-45f7-ab94-b1843d1c4114/cinder-scheduler/1.log" Nov 21 15:19:46 crc kubenswrapper[4675]: I1121 15:19:46.986647 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c8951ed-3fad-45f7-ab94-b1843d1c4114/cinder-scheduler/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.052129 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2c8951ed-3fad-45f7-ab94-b1843d1c4114/probe/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.164531 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tn87w_7b8cd75a-8bee-4db2-ba7a-19cf0947fbe6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.281020 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4hdpm_e542a2fc-0fd2-49fa-873e-1d580edd93d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.369856 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hsxmx_2d80929c-c14e-4ec5-943f-de21d45af551/init/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.587433 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hsxmx_2d80929c-c14e-4ec5-943f-de21d45af551/init/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.648261 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jqzl9_a3366490-72da-4662-a609-d3fd320bac49/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.680291 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hsxmx_2d80929c-c14e-4ec5-943f-de21d45af551/dnsmasq-dns/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.907955 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc/glance-httpd/0.log" Nov 21 15:19:47 crc kubenswrapper[4675]: I1121 15:19:47.926668 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fcb2db9c-ab36-40ae-8a0a-e3a48a9b92bc/glance-log/0.log" Nov 21 15:19:48 crc kubenswrapper[4675]: I1121 15:19:48.096343 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e08a8ae1-1033-4b31-89df-b85614075cbf/glance-httpd/0.log" Nov 21 15:19:48 crc kubenswrapper[4675]: I1121 15:19:48.132233 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e08a8ae1-1033-4b31-89df-b85614075cbf/glance-log/0.log" Nov 21 15:19:48 crc kubenswrapper[4675]: I1121 15:19:48.652499 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7f8454c7d4-x6h7x_a48b13f7-e6d5-448e-b83e-be3b66c31fb0/heat-engine/0.log" Nov 21 15:19:48 crc kubenswrapper[4675]: I1121 15:19:48.971345 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l69j6_cfe1a316-0dad-402c-b056-2302e5fe219a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.094432 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5f97697d96-mp958_5bdb7df7-dd8e-4fea-9634-65fa6f741de8/heat-cfnapi/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.114322 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8tsm2_ec8162e7-cc12-48eb-982d-036b866eaeb0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.137831 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-77f4868784-6nk2h_148b5a1d-39fe-4a33-88ee-97b3383595ff/heat-api/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.496945 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395621-7g6lb_06506868-9f56-4ca3-870a-bf6062173504/keystone-cron/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.505987 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395561-scxxf_6a16764c-944a-48be-ba08-7b46b89ffdba/keystone-cron/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.648372 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7ff9b4b9fd-shbm9_f494051d-de96-4044-a28d-3b05672b5a66/keystone-api/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.745819 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_4e5dacaf-d4a4-4580-9ff2-3e7d548f0e4a/kube-state-metrics/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.836757 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-w6mlh_31f6cccc-b0b8-4b5e-b4f3-ba68ef83f687/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:49 crc kubenswrapper[4675]: I1121 15:19:49.945898 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-mkqjd_88b1961a-032d-40c5-83f7-602511b7808e/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:50 crc kubenswrapper[4675]: I1121 15:19:50.151972 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_151c5400-b481-4494-aacd-020595cc112c/mysqld-exporter/0.log" Nov 21 15:19:50 crc kubenswrapper[4675]: I1121 15:19:50.491793 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dbf4b8f9c-tnln2_7d084a12-d301-4ea1-b049-ca6211a8929d/neutron-httpd/0.log" Nov 21 15:19:50 crc kubenswrapper[4675]: I1121 15:19:50.530203 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jgdqz_50a8108e-2cd1-42e7-9efe-5c2478adb797/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:50 crc kubenswrapper[4675]: I1121 15:19:50.596187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dbf4b8f9c-tnln2_7d084a12-d301-4ea1-b049-ca6211a8929d/neutron-api/0.log" Nov 21 15:19:51 crc kubenswrapper[4675]: I1121 15:19:51.221823 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_18c40348-4d27-4b4c-9b8a-eac9b8b7252a/nova-cell0-conductor-conductor/0.log" Nov 21 15:19:51 crc kubenswrapper[4675]: I1121 15:19:51.352778 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5bfa0c26-ff80-4079-aef8-6cc1a62ba554/nova-api-log/0.log" Nov 21 15:19:51 crc kubenswrapper[4675]: I1121 15:19:51.460354 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a2816c5b-51b2-4542-b0ff-cdc5bb61c948/nova-cell1-conductor-conductor/0.log" Nov 21 15:19:51 crc kubenswrapper[4675]: I1121 15:19:51.782934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vlb78_2205f0b5-339c-4165-84fd-9c9f117d757f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:51 crc kubenswrapper[4675]: I1121 15:19:51.794463 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_25ba2d9f-d85c-403d-b7e6-8b17f48e4316/nova-cell1-novncproxy-novncproxy/0.log" Nov 21 15:19:51 crc kubenswrapper[4675]: I1121 15:19:51.994201 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5bfa0c26-ff80-4079-aef8-6cc1a62ba554/nova-api-api/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.105226 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f2e69762-ea6c-4d7a-a407-8373c1c7b734/nova-metadata-log/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.454158 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c9adb63e-74d2-48f6-b639-4b22def78e35/mysql-bootstrap/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.531721 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_a675b127-f342-4527-b0f1-9e668fcf5ede/nova-scheduler-scheduler/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.618801 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c9adb63e-74d2-48f6-b639-4b22def78e35/mysql-bootstrap/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.658867 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c9adb63e-74d2-48f6-b639-4b22def78e35/galera/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.852834 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_05bf6265-2f8a-4d78-9f5a-05304816937d/mysql-bootstrap/0.log" Nov 21 15:19:52 crc kubenswrapper[4675]: I1121 15:19:52.994808 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_05bf6265-2f8a-4d78-9f5a-05304816937d/mysql-bootstrap/0.log" Nov 21 15:19:53 crc kubenswrapper[4675]: I1121 15:19:53.057087 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_05bf6265-2f8a-4d78-9f5a-05304816937d/galera/0.log" Nov 21 15:19:53 crc kubenswrapper[4675]: I1121 15:19:53.381751 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b6ec2a5-ea89-459f-b66c-4822e68f1498/openstackclient/0.log" Nov 21 15:19:53 crc kubenswrapper[4675]: I1121 15:19:53.386999 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-l5r9b_4a15b97a-aa41-4d4d-8f75-0b3d2193eded/ovn-controller/0.log" Nov 21 15:19:53 crc kubenswrapper[4675]: I1121 15:19:53.605245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xc9np_c1c50a5b-1bd7-4c2a-9424-770e8170212e/openstack-network-exporter/0.log" Nov 21 15:19:53 crc kubenswrapper[4675]: I1121 15:19:53.820770 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovsdb-server-init/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.042397 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovsdb-server/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.055402 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovsdb-server-init/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.082303 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-j7prf_90977e3d-e36b-4b13-b7f8-f98a6fdc56bc/ovs-vswitchd/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.281570 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rvbj7_cd5e1c55-691e-40cc-9e53-b905864402fb/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.527247 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e5d93705-ae99-48ab-99e3-1e225f06ab6e/openstack-network-exporter/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.569215 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e5d93705-ae99-48ab-99e3-1e225f06ab6e/ovn-northd/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.731765 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_b1a22076-aa43-4fe3-83ad-1a3e22d3abc7/openstack-network-exporter/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.768709 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f2e69762-ea6c-4d7a-a407-8373c1c7b734/nova-metadata-metadata/0.log" Nov 21 15:19:54 crc kubenswrapper[4675]: I1121 15:19:54.789480 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b1a22076-aa43-4fe3-83ad-1a3e22d3abc7/ovsdbserver-nb/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.013128 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b5c900d8-26df-4201-9693-318f45bb93d8/openstack-network-exporter/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.034715 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b5c900d8-26df-4201-9693-318f45bb93d8/ovsdbserver-sb/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.324428 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/init-config-reloader/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.324890 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64487ff74-sfh5j_2ff36b45-ea40-44fd-84fe-dc732a5af439/placement-api/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.403164 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64487ff74-sfh5j_2ff36b45-ea40-44fd-84fe-dc732a5af439/placement-log/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.584924 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/init-config-reloader/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.585731 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/config-reloader/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.594857 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/prometheus/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.679982 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ac0a374c-2ff5-4fa8-b6be-5d4ae4445a79/thanos-sidecar/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.757117 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8ae83905-939b-4ae5-bab9-993356ce17b8/memcached/0.log" Nov 21 15:19:55 crc kubenswrapper[4675]: I1121 15:19:55.846861 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6b2ab3dd-83aa-4d37-8f44-bb3d277932fb/setup-container/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.000405 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5ef674f-8b42-40b1-ba1a-fa2d68858b31/setup-container/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.014589 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6b2ab3dd-83aa-4d37-8f44-bb3d277932fb/setup-container/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.023158 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6b2ab3dd-83aa-4d37-8f44-bb3d277932fb/rabbitmq/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.207118 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5ef674f-8b42-40b1-ba1a-fa2d68858b31/setup-container/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.224586 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a5ef674f-8b42-40b1-ba1a-fa2d68858b31/rabbitmq/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.246000 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2fb22_ea3b33b3-6f01-403c-87ed-3c0727db2a97/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.414185 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2xt7s_021eb0fa-a9a9-4af1-bc66-8b868fa3c41c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.430691 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gdztp_de094806-84d9-4903-be6c-c00e33b1e782/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.537941 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rblk7_d47864c9-0269-47a5-b718-bce3541df7c5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.653177 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-dpdgb_7c34957b-d4df-4448-9396-9e7244dc85b5/ssh-known-hosts-edpm-deployment/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.786308 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cb8d89bd7-jsjdv_35b58484-6cb2-4edc-bea9-4d3a8d6b1479/proxy-server/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.873586 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5cb8d89bd7-jsjdv_35b58484-6cb2-4edc-bea9-4d3a8d6b1479/proxy-httpd/0.log" Nov 21 15:19:56 crc kubenswrapper[4675]: I1121 15:19:56.891498 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6s7pj_e01d9dde-a9f3-4efc-8997-bf3914cffde9/swift-ring-rebalance/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.020156 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-auditor/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.050294 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-reaper/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.165807 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-replicator/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.178933 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/account-server/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.248802 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-auditor/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.293396 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-server/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.323183 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-replicator/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.354690 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/container-updater/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.391884 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-auditor/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.484382 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-expirer/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.506664 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-replicator/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.527401 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-server/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.564086 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/object-updater/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.588339 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/rsync/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.695193 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29cc3528-47d5-4479-85fc-37f8e53f1caf/swift-recon-cron/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.756900 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pqp25_6068452e-fbc5-44c6-8141-d3b8b3de6f92/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:57 crc kubenswrapper[4675]: I1121 15:19:57.906438 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-l9mdg_087ded3f-0cd4-4471-b0b8-f23a7de03a26/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:58 crc kubenswrapper[4675]: I1121 15:19:58.253433 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_bdd9ab23-ad3b-4d5e-8f26-6c6cf175fa94/test-operator-logs-container/0.log" Nov 21 15:19:58 crc kubenswrapper[4675]: I1121 15:19:58.347723 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-t67kg_045679ad-48d6-48ed-a9a5-8699cc283733/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:19:58 crc kubenswrapper[4675]: I1121 15:19:58.605020 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_71faa523-7927-4fc1-bb12-0f787758620a/tempest-tests-tempest-tests-runner/0.log" Nov 21 15:20:16 crc kubenswrapper[4675]: I1121 15:20:16.136330 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:20:16 crc kubenswrapper[4675]: I1121 15:20:16.137812 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.488222 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/util/0.log" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.654514 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/util/0.log" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.657551 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/pull/0.log" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.658248 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/pull/0.log" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.842984 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/pull/0.log" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.846763 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/util/0.log" Nov 21 15:20:21 crc kubenswrapper[4675]: I1121 15:20:21.875318 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdh7t6n_4ebf20ac-e131-4e83-8493-aab35b1f206a/extract/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.026269 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-k4lls_f3631bac-6fa8-4ad8-bbad-df880af19292/kube-rbac-proxy/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.100448 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-k4lls_f3631bac-6fa8-4ad8-bbad-df880af19292/manager/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.160287 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-h9twj_7e3588ab-94d7-482f-97c4-67d573181e2c/kube-rbac-proxy/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.317171 4675 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-h9twj_7e3588ab-94d7-482f-97c4-67d573181e2c/manager/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.333963 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-dd9xn_45c2d5a9-a319-4012-91de-77769b6ad913/kube-rbac-proxy/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.354939 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-dd9xn_45c2d5a9-a319-4012-91de-77769b6ad913/manager/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.486264 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-726dt_26fa3df8-f4d3-44d1-8e9b-c20dca446570/kube-rbac-proxy/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.590765 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-726dt_26fa3df8-f4d3-44d1-8e9b-c20dca446570/manager/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.709902 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-4gf9h_2808d52f-0a70-48df-9b55-052faa81f93c/kube-rbac-proxy/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.794467 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-hld5f_a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca/kube-rbac-proxy/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.818544 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-4gf9h_2808d52f-0a70-48df-9b55-052faa81f93c/manager/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.929557 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-hld5f_a3c92b8e-62bc-4b54-b1ce-a32c275cd9ca/manager/0.log" Nov 21 15:20:22 crc kubenswrapper[4675]: I1121 15:20:22.997695 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-qb7zq_43da63e0-75a0-4e90-9e81-3b3be38a45b1/kube-rbac-proxy/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.200784 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-qb7zq_43da63e0-75a0-4e90-9e81-3b3be38a45b1/manager/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.243145 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kxfcm_2edf1cc1-0bd0-4329-969f-c2890b507972/kube-rbac-proxy/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.243303 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kxfcm_2edf1cc1-0bd0-4329-969f-c2890b507972/manager/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.407565 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-lmpfg_89aec3aa-b2d8-4702-b0fd-005c6d51c669/kube-rbac-proxy/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 
15:20:23.502572 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-lmpfg_89aec3aa-b2d8-4702-b0fd-005c6d51c669/manager/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.596213 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-lf6g7_cc7542b4-b4d9-46e5-8819-784ec50c9c11/manager/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.620943 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-lf6g7_cc7542b4-b4d9-46e5-8819-784ec50c9c11/kube-rbac-proxy/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.717429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-p4r2x_eb3d3afa-eaa5-4271-8a33-45a009a9742a/kube-rbac-proxy/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.879737 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-p4r2x_eb3d3afa-eaa5-4271-8a33-45a009a9742a/manager/0.log" Nov 21 15:20:23 crc kubenswrapper[4675]: I1121 15:20:23.967448 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-46782_27a202b7-1cf0-4dda-a010-6d59fbe881ed/kube-rbac-proxy/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.034944 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-46782_27a202b7-1cf0-4dda-a010-6d59fbe881ed/manager/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.136748 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-nmtt7_693e699a-cdc4-4282-9ba6-6947c3e42726/kube-rbac-proxy/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.260307 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-nmtt7_693e699a-cdc4-4282-9ba6-6947c3e42726/manager/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.374473 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bhcrz_e8768c70-accf-460e-a781-b5d9eff26f2e/kube-rbac-proxy/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.375143 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bhcrz_e8768c70-accf-460e-a781-b5d9eff26f2e/manager/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.503422 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-cwz25_77644f3e-1a90-4f49-a43c-b3d5b23c8184/kube-rbac-proxy/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.584522 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-cwz25_77644f3e-1a90-4f49-a43c-b3d5b23c8184/manager/0.log" Nov 21 15:20:24 crc kubenswrapper[4675]: I1121 15:20:24.977567 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4lmmn_d570523a-f2e0-4913-a405-ac5b8582b059/registry-server/0.log" Nov 21 15:20:25 crc 
kubenswrapper[4675]: I1121 15:20:25.030498 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bc9ddc77b-27rxv_867fb4c9-f1c0-49da-9a71-3372347fe4f2/operator/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.178632 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p2fwv_0aaf2b35-164b-400a-ad78-84961c2a599c/kube-rbac-proxy/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.368101 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-p2fwv_0aaf2b35-164b-400a-ad78-84961c2a599c/manager/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.435236 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dcswk_5661f60c-1801-419e-abaa-7f5e0825f148/kube-rbac-proxy/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.641219 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dcswk_5661f60c-1801-419e-abaa-7f5e0825f148/manager/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.770667 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-65k5k_fec54436-1bcf-4e1e-af27-d86372b07bbe/operator/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.908048 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rpmxk_e5d16474-89e2-4e35-8339-24afbb962e4b/kube-rbac-proxy/0.log" Nov 21 15:20:25 crc kubenswrapper[4675]: I1121 15:20:25.992671 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rpmxk_e5d16474-89e2-4e35-8339-24afbb962e4b/manager/0.log" Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.014529 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-8swxd_cfb97bdd-e357-475c-ab5c-184e50acb0dc/kube-rbac-proxy/0.log" Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.318431 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-2l49b_dd5c12ec-ee26-458d-85f3-2b6bd7c021f1/manager/0.log" Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.354089 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-2l49b_dd5c12ec-ee26-458d-85f3-2b6bd7c021f1/kube-rbac-proxy/0.log" Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.508409 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-55qzb_fc4f9f2a-5093-4df0-919f-037e57993a93/kube-rbac-proxy/0.log" Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.545605 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79fb5496bb-5zhcc_376edcff-4439-418a-80e3-6f6309cdb8f0/manager/0.log" Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.637584 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-8swxd_cfb97bdd-e357-475c-ab5c-184e50acb0dc/manager/0.log" Nov 21 
Nov 21 15:20:26 crc kubenswrapper[4675]: I1121 15:20:26.657500 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-55qzb_fc4f9f2a-5093-4df0-919f-037e57993a93/manager/0.log"
Nov 21 15:20:43 crc kubenswrapper[4675]: I1121 15:20:43.300823 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7wq8v_90a5318c-96de-40ae-a8f4-87241ab72f28/control-plane-machine-set-operator/0.log"
Nov 21 15:20:43 crc kubenswrapper[4675]: I1121 15:20:43.511488 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwz5c_eb5a0d6b-3347-4d29-90a5-f554c65e5ddb/kube-rbac-proxy/0.log"
Nov 21 15:20:43 crc kubenswrapper[4675]: I1121 15:20:43.514216 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwz5c_eb5a0d6b-3347-4d29-90a5-f554c65e5ddb/machine-api-operator/0.log"
Nov 21 15:20:46 crc kubenswrapper[4675]: I1121 15:20:46.136824 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:20:46 crc kubenswrapper[4675]: I1121 15:20:46.137359 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:20:46 crc kubenswrapper[4675]: I1121 15:20:46.137408 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx"
Nov 21 15:20:46 crc kubenswrapper[4675]: I1121 15:20:46.138251 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 15:20:46 crc kubenswrapper[4675]: I1121 15:20:46.138306 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" gracePeriod=600
Nov 21 15:20:47 crc kubenswrapper[4675]: E1121 15:20:47.286947 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:20:47 crc kubenswrapper[4675]: I1121 15:20:47.326377 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" exitCode=0
Nov 21 15:20:47 crc kubenswrapper[4675]: I1121 15:20:47.326432 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"}
Nov 21 15:20:47 crc kubenswrapper[4675]: I1121 15:20:47.326475 4675 scope.go:117] "RemoveContainer" containerID="f4a54f482157d9a82e4690ff9119779cc0b9536c7b965dffc48df0e2a1cda567"
Nov 21 15:20:47 crc kubenswrapper[4675]: I1121 15:20:47.327372 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"
Nov 21 15:20:47 crc kubenswrapper[4675]: E1121 15:20:47.327731 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:20:55 crc kubenswrapper[4675]: I1121 15:20:55.406903 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-2c6xq_5ffe60a3-3d75-49c3-9340-0680d558e18b/cert-manager-controller/0.log"
Nov 21 15:20:55 crc kubenswrapper[4675]: I1121 15:20:55.644714 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-bqb55_054749e0-ba55-43d1-a8d0-3cca3a0b15cf/cert-manager-cainjector/0.log"
Nov 21 15:20:55 crc kubenswrapper[4675]: I1121 15:20:55.683748 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-r47qh_0ba7a123-c240-4c8d-bd82-974e63a888cf/cert-manager-webhook/0.log"
Nov 21 15:21:00 crc kubenswrapper[4675]: I1121 15:21:00.849972 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"
Nov 21 15:21:00 crc kubenswrapper[4675]: E1121 15:21:00.850911 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:21:07 crc kubenswrapper[4675]: I1121 15:21:07.636926 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-crd2x_4ad50a02-1502-4a0b-8f49-32988242ec6b/nmstate-console-plugin/0.log"
Nov 21 15:21:07 crc kubenswrapper[4675]: I1121 15:21:07.846919 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qvplj_de0ab26e-1ac3-48eb-9647-f55c0249b9ec/nmstate-handler/0.log"
Nov 21 15:21:07 crc kubenswrapper[4675]: I1121 15:21:07.868793 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4m8jq_32581066-5208-499f-8473-d7002fd31dca/kube-rbac-proxy/0.log"
Nov 21 15:21:07 crc kubenswrapper[4675]: I1121 15:21:07.894635 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4m8jq_32581066-5208-499f-8473-d7002fd31dca/nmstate-metrics/0.log"
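The failed liveness probe above kills the container (gracePeriod=600), and the RemoveContainer / "back-off 5m0s" pairs that repeat from here on are the kubelet's crash-loop backoff holding machine-config-daemon in CrashLoopBackOff. Only the 5-minute cap appears in the message itself; that the delay starts at 10s and doubles per restart is the kubelet's documented default, assumed in this sketch:

    # Sketch of the crash-loop backoff schedule implied by "back-off 5m0s".
    # Assumption: kubelet default of a 10s initial delay, doubling each
    # restart; only the 300s cap is visible in the log text above.
    def crashloop_delays(restarts: int, base: float = 10.0, cap: float = 300.0):
        delay = base
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    print([f"{int(d)}s" for d in crashloop_delays(8)])
    # ['10s', '20s', '40s', '80s', '160s', '300s', '300s', '300s']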
Nov 21 15:21:08 crc kubenswrapper[4675]: I1121 15:21:08.045000 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-2hlgb_83e5bca9-014a-43f4-8b6b-f4a4052ed662/nmstate-operator/0.log"
Nov 21 15:21:08 crc kubenswrapper[4675]: I1121 15:21:08.104618 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-9ztsm_f304a655-1aaa-43a3-81c1-32e5214c02cf/nmstate-webhook/0.log"
Nov 21 15:21:14 crc kubenswrapper[4675]: I1121 15:21:14.859298 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"
Nov 21 15:21:14 crc kubenswrapper[4675]: E1121 15:21:14.860870 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:21:19 crc kubenswrapper[4675]: I1121 15:21:19.924176 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/kube-rbac-proxy/0.log"
Nov 21 15:21:19 crc kubenswrapper[4675]: I1121 15:21:19.940235 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/manager/0.log"
Nov 21 15:21:22 crc kubenswrapper[4675]: I1121 15:21:22.920305 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5cb8d89bd7-jsjdv" podUID="35b58484-6cb2-4edc-bea9-4d3a8d6b1479" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Nov 21 15:21:27 crc kubenswrapper[4675]: I1121 15:21:27.849526 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"
Nov 21 15:21:27 crc kubenswrapper[4675]: E1121 15:21:27.850440 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:21:34 crc kubenswrapper[4675]: I1121 15:21:34.710992 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-5blzb_91cca7cf-0a78-48e9-80ca-7d7c7e93d0da/cluster-logging-operator/0.log"
Nov 21 15:21:34 crc kubenswrapper[4675]: I1121 15:21:34.879262 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-4nf2w_bbab1657-ebec-4d72-92b4-765a9fb4bd21/collector/0.log"
Nov 21 15:21:34 crc kubenswrapper[4675]: I1121 15:21:34.952150 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_69149637-8974-41a7-b494-3db4c647e9de/loki-compactor/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.080412 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-czh7n_15313a35-1860-458c-9520-8eb44937ad1d/loki-distributor/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.142335 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-mpc5k_d8711ff7-1164-4f51-9748-d563536a90d3/gateway/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.209530 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-mpc5k_d8711ff7-1164-4f51-9748-d563536a90d3/opa/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.366485 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-pflft_3c26c6d8-717f-4d7d-9a42-bdb65213fe5c/gateway/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.379110 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6b7bc6b4d8-pflft_3c26c6d8-717f-4d7d-9a42-bdb65213fe5c/opa/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.576746 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_99ed9558-944c-4917-9daf-657bc7f2cbf1/loki-index-gateway/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.667968 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f7b8a2bc-e416-4521-9fa2-44dd6bd69400/loki-ingester/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.794723 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-tdn52_fcc0cd18-60e3-4d70-8504-a0987a0cea4e/loki-querier/0.log"
Nov 21 15:21:35 crc kubenswrapper[4675]: I1121 15:21:35.882085 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-65gdq_c804a918-f222-49f2-87b7-14b0dae0d37f/loki-query-frontend/0.log"
Nov 21 15:21:38 crc kubenswrapper[4675]: I1121 15:21:38.849725 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf"
Nov 21 15:21:38 crc kubenswrapper[4675]: E1121 15:21:38.850385 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178"
Nov 21 15:21:49 crc kubenswrapper[4675]: I1121 15:21:49.352149 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-krt5f_b076ee09-1376-4f8e-a15f-0b42e2b163d2/kube-rbac-proxy/0.log"
Nov 21 15:21:49 crc kubenswrapper[4675]: I1121 15:21:49.581890 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-krt5f_b076ee09-1376-4f8e-a15f-0b42e2b163d2/controller/0.log"
Nov 21 15:21:49 crc kubenswrapper[4675]: I1121 15:21:49.675337 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log"
path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log" Nov 21 15:21:49 crc kubenswrapper[4675]: I1121 15:21:49.945405 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log" Nov 21 15:21:49 crc kubenswrapper[4675]: I1121 15:21:49.953665 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log" Nov 21 15:21:49 crc kubenswrapper[4675]: I1121 15:21:49.973444 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.197155 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.209337 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.223000 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.231011 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.448973 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-frr-files/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.452124 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-reloader/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.462634 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/cp-metrics/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.483026 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/controller/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.659082 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/kube-rbac-proxy/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.690476 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/frr-metrics/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.737976 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/kube-rbac-proxy-frr/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.850937 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:21:50 crc kubenswrapper[4675]: E1121 15:21:50.851383 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.932232 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/reloader/0.log" Nov 21 15:21:50 crc kubenswrapper[4675]: I1121 15:21:50.955860 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-p7bcc_046b1803-3201-4c23-bb9d-2cca261bdda0/frr-k8s-webhook-server/0.log" Nov 21 15:21:51 crc kubenswrapper[4675]: I1121 15:21:51.309532 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6c7d7f4-57m9x_c3c93fe3-f02a-4ffa-9f4a-0a69271d0edd/manager/0.log" Nov 21 15:21:51 crc kubenswrapper[4675]: I1121 15:21:51.397969 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6bbc7fcc74-d58sq_21f16da1-dc0f-421b-b6f2-13c658268ae7/webhook-server/0.log" Nov 21 15:21:51 crc kubenswrapper[4675]: I1121 15:21:51.575902 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-plc79_22f88730-5c3f-4c5d-a223-be8170e96588/kube-rbac-proxy/0.log" Nov 21 15:21:52 crc kubenswrapper[4675]: I1121 15:21:52.319100 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-plc79_22f88730-5c3f-4c5d-a223-be8170e96588/speaker/0.log" Nov 21 15:21:52 crc kubenswrapper[4675]: I1121 15:21:52.774991 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-896kh_afecd2d7-f280-48fd-b79e-eec3a7ee36f1/frr/0.log" Nov 21 15:22:02 crc kubenswrapper[4675]: I1121 15:22:02.849361 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:22:02 crc kubenswrapper[4675]: E1121 15:22:02.850330 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:22:05 crc kubenswrapper[4675]: I1121 15:22:05.527351 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/util/0.log" Nov 21 15:22:05 crc kubenswrapper[4675]: I1121 15:22:05.762608 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/pull/0.log" Nov 21 15:22:05 crc kubenswrapper[4675]: I1121 15:22:05.804538 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/util/0.log" Nov 21 15:22:05 crc kubenswrapper[4675]: I1121 15:22:05.811679 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/pull/0.log" Nov 21 15:22:05 crc 
Nov 21 15:22:05 crc kubenswrapper[4675]: I1121 15:22:05.999363 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/extract/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.018244 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/pull/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.046869 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t7kzv_7fcb22a1-888b-416d-b2fe-22eb8cdc928b/util/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.242653 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/util/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.621916 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/util/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.642266 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/pull/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.713856 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/pull/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.850672 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/pull/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.869820 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/util/0.log"
Nov 21 15:22:06 crc kubenswrapper[4675]: I1121 15:22:06.929937 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772errwgn_54bee570-2720-4507-89bf-23f1095205a2/extract/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.066593 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/util/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.276964 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/util/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.302553 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/pull/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.312495 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/pull/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.523216 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/extract/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.544576 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/util/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.567989 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105btbn_4268fd55-2e1a-4ce3-b168-61ae292f22b9/pull/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.730198 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/util/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.873878 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/util/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.902638 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/pull/0.log"
Nov 21 15:22:07 crc kubenswrapper[4675]: I1121 15:22:07.945955 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/pull/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.116913 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/util/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.142686 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/extract/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.173398 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fv56f9_c495bdbc-57d1-4c92-8276-2769d303f189/pull/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.299002 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-utilities/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.520669 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-utilities/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.528751 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-content/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.555039 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-content/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.753797 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-utilities/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.764826 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/extract-content/0.log"
Nov 21 15:22:08 crc kubenswrapper[4675]: I1121 15:22:08.987549 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-utilities/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.237053 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-content/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.302139 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-content/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.307237 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-utilities/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.537186 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-content/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.573336 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/extract-utilities/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.772965 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/util/0.log"
Nov 21 15:22:09 crc kubenswrapper[4675]: I1121 15:22:09.926359 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/pull/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.021334 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/util/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.024189 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/pull/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.297815 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/util/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.334941 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/pull/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.365217 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c64d5q8_f5500da3-4dcd-4802-86a8-f473843eebe4/extract/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.756936 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tmbz4_da963da7-38b3-45bf-88ba-54b6e6b9a58f/registry-server/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.897478 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6q9sj_6a21daba-95a0-4f20-91b5-de4dc44aa0b1/marketplace-operator/0.log"
Nov 21 15:22:10 crc kubenswrapper[4675]: I1121 15:22:10.964599 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pwjwv_1c7213f9-3076-4d37-9803-1156edec2aaa/registry-server/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.023638 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-utilities/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.125858 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-utilities/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.162694 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-content/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.166101 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-content/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.338749 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-utilities/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.372666 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/extract-content/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.405655 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-utilities/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.618606 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pk6br_1d54f5b5-d5db-4104-9f8b-072086f8f9a4/registry-server/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.635111 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-content/0.log"
Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.642858 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-content/0.log"
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-utilities/0.log" Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.829100 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-utilities/0.log" Nov 21 15:22:11 crc kubenswrapper[4675]: I1121 15:22:11.867420 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/extract-content/0.log" Nov 21 15:22:12 crc kubenswrapper[4675]: I1121 15:22:12.524582 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zm72j_3f6b3f8e-0776-47f2-bbe4-ed0d6af49813/registry-server/0.log" Nov 21 15:22:14 crc kubenswrapper[4675]: I1121 15:22:14.849121 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:22:14 crc kubenswrapper[4675]: E1121 15:22:14.849946 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:22:23 crc kubenswrapper[4675]: I1121 15:22:23.538737 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-mx844_ce64e510-eca3-48f5-858d-165c3d3cfba7/prometheus-operator/0.log" Nov 21 15:22:23 crc kubenswrapper[4675]: I1121 15:22:23.685060 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fcccb4b7c-pnqpf_320effcd-ec3e-4741-b3d9-e0ec17502e50/prometheus-operator-admission-webhook/0.log" Nov 21 15:22:23 crc kubenswrapper[4675]: I1121 15:22:23.726977 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7fcccb4b7c-wrv22_1a50974d-f334-4845-b892-5e4b97fc3d79/prometheus-operator-admission-webhook/0.log" Nov 21 15:22:23 crc kubenswrapper[4675]: I1121 15:22:23.909770 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-mdzgh_1d6f8b49-15cf-404d-8bda-1ae7a7292d2b/observability-ui-dashboards/0.log" Nov 21 15:22:23 crc kubenswrapper[4675]: I1121 15:22:23.927856 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-cb77k_57e668d4-e3df-4b36-ad58-51e5b7f2d16e/operator/0.log" Nov 21 15:22:24 crc kubenswrapper[4675]: I1121 15:22:24.081539 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-67wfr_01c95951-b168-42ed-aab7-9ffe813b6d55/perses-operator/0.log" Nov 21 15:22:25 crc kubenswrapper[4675]: I1121 15:22:25.850086 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:22:25 crc kubenswrapper[4675]: E1121 15:22:25.850831 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:22:36 crc kubenswrapper[4675]: I1121 15:22:36.183210 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/kube-rbac-proxy/0.log" Nov 21 15:22:36 crc kubenswrapper[4675]: I1121 15:22:36.240505 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-58d5765bd4-29h67_e7c19bc7-9927-4cb7-98e2-2f834e3ff496/manager/0.log" Nov 21 15:22:37 crc kubenswrapper[4675]: I1121 15:22:37.850527 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:22:37 crc kubenswrapper[4675]: E1121 15:22:37.851257 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:22:52 crc kubenswrapper[4675]: I1121 15:22:52.849056 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:22:52 crc kubenswrapper[4675]: E1121 15:22:52.850647 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:23:07 crc kubenswrapper[4675]: I1121 15:23:07.849358 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:23:07 crc kubenswrapper[4675]: E1121 15:23:07.850407 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:23:18 crc kubenswrapper[4675]: I1121 15:23:18.849226 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:23:18 crc kubenswrapper[4675]: E1121 15:23:18.850386 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:23:30 crc kubenswrapper[4675]: I1121 15:23:30.850195 4675 scope.go:117] 
"RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:23:30 crc kubenswrapper[4675]: E1121 15:23:30.851325 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:23:42 crc kubenswrapper[4675]: I1121 15:23:42.850027 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:23:42 crc kubenswrapper[4675]: E1121 15:23:42.850871 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:23:54 crc kubenswrapper[4675]: I1121 15:23:54.867533 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:23:54 crc kubenswrapper[4675]: E1121 15:23:54.870848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:24:06 crc kubenswrapper[4675]: I1121 15:24:06.849907 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:24:06 crc kubenswrapper[4675]: E1121 15:24:06.850836 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:24:20 crc kubenswrapper[4675]: I1121 15:24:20.849292 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:24:20 crc kubenswrapper[4675]: E1121 15:24:20.850474 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:24:34 crc kubenswrapper[4675]: I1121 15:24:34.218804 4675 scope.go:117] "RemoveContainer" containerID="94b0942984e27f88d6ba5619e95d632b1b2c512464dce0d362406ac568a203c1" Nov 21 15:24:35 crc kubenswrapper[4675]: I1121 15:24:35.849419 4675 scope.go:117] 
"RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:24:35 crc kubenswrapper[4675]: E1121 15:24:35.850386 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:24:37 crc kubenswrapper[4675]: I1121 15:24:37.047131 4675 generic.go:334] "Generic (PLEG): container finished" podID="e596a80c-ed6e-43ca-9936-964085f6614e" containerID="bda1d5b1756ecbb53908eb8d8cda945d4c8521f3148ac5b83205366631721279" exitCode=0 Nov 21 15:24:37 crc kubenswrapper[4675]: I1121 15:24:37.047204 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57qhv/must-gather-lr6sm" event={"ID":"e596a80c-ed6e-43ca-9936-964085f6614e","Type":"ContainerDied","Data":"bda1d5b1756ecbb53908eb8d8cda945d4c8521f3148ac5b83205366631721279"} Nov 21 15:24:37 crc kubenswrapper[4675]: I1121 15:24:37.048390 4675 scope.go:117] "RemoveContainer" containerID="bda1d5b1756ecbb53908eb8d8cda945d4c8521f3148ac5b83205366631721279" Nov 21 15:24:37 crc kubenswrapper[4675]: I1121 15:24:37.919787 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57qhv_must-gather-lr6sm_e596a80c-ed6e-43ca-9936-964085f6614e/gather/0.log" Nov 21 15:24:48 crc kubenswrapper[4675]: I1121 15:24:48.848924 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:24:48 crc kubenswrapper[4675]: E1121 15:24:48.849768 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:24:50 crc kubenswrapper[4675]: I1121 15:24:50.616291 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57qhv/must-gather-lr6sm"] Nov 21 15:24:50 crc kubenswrapper[4675]: I1121 15:24:50.617100 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-57qhv/must-gather-lr6sm" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="copy" containerID="cri-o://c2c065d0709c70e279f1e8d328b448b6028d8c5fa3235d9db02543f6fda1a3c4" gracePeriod=2 Nov 21 15:24:50 crc kubenswrapper[4675]: I1121 15:24:50.627645 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57qhv/must-gather-lr6sm"] Nov 21 15:24:51 crc kubenswrapper[4675]: I1121 15:24:51.204533 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57qhv_must-gather-lr6sm_e596a80c-ed6e-43ca-9936-964085f6614e/copy/0.log" Nov 21 15:24:51 crc kubenswrapper[4675]: I1121 15:24:51.205678 4675 generic.go:334] "Generic (PLEG): container finished" podID="e596a80c-ed6e-43ca-9936-964085f6614e" containerID="c2c065d0709c70e279f1e8d328b448b6028d8c5fa3235d9db02543f6fda1a3c4" exitCode=143 Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.073882 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-57qhv_must-gather-lr6sm_e596a80c-ed6e-43ca-9936-964085f6614e/copy/0.log" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.076305 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.136996 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77j7\" (UniqueName: \"kubernetes.io/projected/e596a80c-ed6e-43ca-9936-964085f6614e-kube-api-access-x77j7\") pod \"e596a80c-ed6e-43ca-9936-964085f6614e\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.137250 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e596a80c-ed6e-43ca-9936-964085f6614e-must-gather-output\") pod \"e596a80c-ed6e-43ca-9936-964085f6614e\" (UID: \"e596a80c-ed6e-43ca-9936-964085f6614e\") " Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.163886 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e596a80c-ed6e-43ca-9936-964085f6614e-kube-api-access-x77j7" (OuterVolumeSpecName: "kube-api-access-x77j7") pod "e596a80c-ed6e-43ca-9936-964085f6614e" (UID: "e596a80c-ed6e-43ca-9936-964085f6614e"). InnerVolumeSpecName "kube-api-access-x77j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.229143 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57qhv_must-gather-lr6sm_e596a80c-ed6e-43ca-9936-964085f6614e/copy/0.log" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.229755 4675 scope.go:117] "RemoveContainer" containerID="c2c065d0709c70e279f1e8d328b448b6028d8c5fa3235d9db02543f6fda1a3c4" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.229809 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57qhv/must-gather-lr6sm" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.240135 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77j7\" (UniqueName: \"kubernetes.io/projected/e596a80c-ed6e-43ca-9936-964085f6614e-kube-api-access-x77j7\") on node \"crc\" DevicePath \"\"" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.253571 4675 scope.go:117] "RemoveContainer" containerID="bda1d5b1756ecbb53908eb8d8cda945d4c8521f3148ac5b83205366631721279" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.317798 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e596a80c-ed6e-43ca-9936-964085f6614e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e596a80c-ed6e-43ca-9936-964085f6614e" (UID: "e596a80c-ed6e-43ca-9936-964085f6614e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:24:53 crc kubenswrapper[4675]: I1121 15:24:53.341845 4675 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e596a80c-ed6e-43ca-9936-964085f6614e-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 21 15:24:54 crc kubenswrapper[4675]: I1121 15:24:54.863484 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" path="/var/lib/kubelet/pods/e596a80c-ed6e-43ca-9936-964085f6614e/volumes" Nov 21 15:25:03 crc kubenswrapper[4675]: I1121 15:25:03.849849 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:25:03 crc kubenswrapper[4675]: E1121 15:25:03.850736 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:25:16 crc kubenswrapper[4675]: I1121 15:25:16.849148 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:25:16 crc kubenswrapper[4675]: E1121 15:25:16.850104 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.101031 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlbwc"] Nov 21 15:25:19 crc kubenswrapper[4675]: E1121 15:25:19.101908 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfee818-d0a1-468e-94d2-237d75fed9f5" containerName="container-00" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.101925 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfee818-d0a1-468e-94d2-237d75fed9f5" containerName="container-00" Nov 21 15:25:19 crc kubenswrapper[4675]: E1121 15:25:19.101981 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="copy" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.101992 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="copy" Nov 21 15:25:19 crc kubenswrapper[4675]: E1121 15:25:19.102013 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="gather" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.102021 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="gather" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.102336 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="copy" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.102388 4675 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4cfee818-d0a1-468e-94d2-237d75fed9f5" containerName="container-00" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.102400 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e596a80c-ed6e-43ca-9936-964085f6614e" containerName="gather" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.104635 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.123414 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlbwc"] Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.226721 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2t9\" (UniqueName: \"kubernetes.io/projected/32ff446f-445c-45e9-94aa-21868bff8336-kube-api-access-ps2t9\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.226854 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-utilities\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.226876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-catalog-content\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.329594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps2t9\" (UniqueName: \"kubernetes.io/projected/32ff446f-445c-45e9-94aa-21868bff8336-kube-api-access-ps2t9\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.330029 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-utilities\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.330050 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-catalog-content\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.330894 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-utilities\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.330899 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-catalog-content\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.359123 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps2t9\" (UniqueName: \"kubernetes.io/projected/32ff446f-445c-45e9-94aa-21868bff8336-kube-api-access-ps2t9\") pod \"community-operators-nlbwc\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:19 crc kubenswrapper[4675]: I1121 15:25:19.443457 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:20 crc kubenswrapper[4675]: I1121 15:25:20.051010 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlbwc"] Nov 21 15:25:20 crc kubenswrapper[4675]: I1121 15:25:20.532730 4675 generic.go:334] "Generic (PLEG): container finished" podID="32ff446f-445c-45e9-94aa-21868bff8336" containerID="5b39d5bd710fe6e434f3bf65b1fd53b78674a67dc37b3bbda37f4bc34803c67b" exitCode=0 Nov 21 15:25:20 crc kubenswrapper[4675]: I1121 15:25:20.532778 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerDied","Data":"5b39d5bd710fe6e434f3bf65b1fd53b78674a67dc37b3bbda37f4bc34803c67b"} Nov 21 15:25:20 crc kubenswrapper[4675]: I1121 15:25:20.534825 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerStarted","Data":"10b2024d9441516d28cc9a1819173333719635507788d35fd1d185d088e77910"} Nov 21 15:25:20 crc kubenswrapper[4675]: I1121 15:25:20.536035 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:25:20 crc kubenswrapper[4675]: E1121 15:25:20.683013 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ff446f_445c_45e9_94aa_21868bff8336.slice/crio-conmon-5b39d5bd710fe6e434f3bf65b1fd53b78674a67dc37b3bbda37f4bc34803c67b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ff446f_445c_45e9_94aa_21868bff8336.slice/crio-5b39d5bd710fe6e434f3bf65b1fd53b78674a67dc37b3bbda37f4bc34803c67b.scope\": RecentStats: unable to find data in memory cache]" Nov 21 15:25:22 crc kubenswrapper[4675]: I1121 15:25:22.557989 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerStarted","Data":"655a198078fcee85dc97d23034518778f3882968ea00654c9af1a0fdbc5db669"} Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.273693 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g5zkd"] Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.277323 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.299603 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5zkd"] Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.432421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr296\" (UniqueName: \"kubernetes.io/projected/c303aff3-8e3e-4327-b4d0-9838a3b0373d-kube-api-access-wr296\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.432957 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-utilities\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.433312 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-catalog-content\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.535858 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-catalog-content\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.535960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr296\" (UniqueName: \"kubernetes.io/projected/c303aff3-8e3e-4327-b4d0-9838a3b0373d-kube-api-access-wr296\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.536095 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-utilities\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.536272 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-catalog-content\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.536526 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-utilities\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.556657 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wr296\" (UniqueName: \"kubernetes.io/projected/c303aff3-8e3e-4327-b4d0-9838a3b0373d-kube-api-access-wr296\") pod \"redhat-marketplace-g5zkd\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:23 crc kubenswrapper[4675]: I1121 15:25:23.607996 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:24 crc kubenswrapper[4675]: I1121 15:25:24.086529 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5zkd"] Nov 21 15:25:24 crc kubenswrapper[4675]: W1121 15:25:24.089388 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc303aff3_8e3e_4327_b4d0_9838a3b0373d.slice/crio-2e14b11b2e0cc09a3076e58b116c20af6e3fa9bb76c177c7a80d75e0fa9a8acf WatchSource:0}: Error finding container 2e14b11b2e0cc09a3076e58b116c20af6e3fa9bb76c177c7a80d75e0fa9a8acf: Status 404 returned error can't find the container with id 2e14b11b2e0cc09a3076e58b116c20af6e3fa9bb76c177c7a80d75e0fa9a8acf Nov 21 15:25:24 crc kubenswrapper[4675]: I1121 15:25:24.582162 4675 generic.go:334] "Generic (PLEG): container finished" podID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerID="23612ce3b089605af2e91c1401f835ac12d60f08500a5002c16545ca95bcccb2" exitCode=0 Nov 21 15:25:24 crc kubenswrapper[4675]: I1121 15:25:24.582202 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerDied","Data":"23612ce3b089605af2e91c1401f835ac12d60f08500a5002c16545ca95bcccb2"} Nov 21 15:25:24 crc kubenswrapper[4675]: I1121 15:25:24.582263 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerStarted","Data":"2e14b11b2e0cc09a3076e58b116c20af6e3fa9bb76c177c7a80d75e0fa9a8acf"} Nov 21 15:25:26 crc kubenswrapper[4675]: I1121 15:25:26.606582 4675 generic.go:334] "Generic (PLEG): container finished" podID="32ff446f-445c-45e9-94aa-21868bff8336" containerID="655a198078fcee85dc97d23034518778f3882968ea00654c9af1a0fdbc5db669" exitCode=0 Nov 21 15:25:26 crc kubenswrapper[4675]: I1121 15:25:26.607056 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerDied","Data":"655a198078fcee85dc97d23034518778f3882968ea00654c9af1a0fdbc5db669"} Nov 21 15:25:26 crc kubenswrapper[4675]: I1121 15:25:26.610590 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerStarted","Data":"9dc07fe5ff06bc0c2e80d4b4f33c242983d2830b60108f6298608dd83ea0ec60"} Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.629190 4675 generic.go:334] "Generic (PLEG): container finished" podID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerID="9dc07fe5ff06bc0c2e80d4b4f33c242983d2830b60108f6298608dd83ea0ec60" exitCode=0 Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.629236 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" 
event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerDied","Data":"9dc07fe5ff06bc0c2e80d4b4f33c242983d2830b60108f6298608dd83ea0ec60"} Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.634999 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerStarted","Data":"f24ff2eec6d88e87855fe0ce09574422414fb6c58d0dcceab751daab9e14f3cd"} Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.687322 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z9c8h"] Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.690409 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.698410 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nlbwc" podStartSLOduration=2.052204148 podStartE2EDuration="8.69839143s" podCreationTimestamp="2025-11-21 15:25:19 +0000 UTC" firstStartedPulling="2025-11-21 15:25:20.535783451 +0000 UTC m=+6797.262198178" lastFinishedPulling="2025-11-21 15:25:27.181970733 +0000 UTC m=+6803.908385460" observedRunningTime="2025-11-21 15:25:27.669883058 +0000 UTC m=+6804.396297775" watchObservedRunningTime="2025-11-21 15:25:27.69839143 +0000 UTC m=+6804.424806157" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.712162 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9c8h"] Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.855492 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-catalog-content\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.855580 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-utilities\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.855766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2s47\" (UniqueName: \"kubernetes.io/projected/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-kube-api-access-g2s47\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.958387 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-catalog-content\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.958461 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-utilities\") pod 
\"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.958588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2s47\" (UniqueName: \"kubernetes.io/projected/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-kube-api-access-g2s47\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.958987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-catalog-content\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.959095 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-utilities\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:27 crc kubenswrapper[4675]: I1121 15:25:27.979239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2s47\" (UniqueName: \"kubernetes.io/projected/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-kube-api-access-g2s47\") pod \"certified-operators-z9c8h\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:28 crc kubenswrapper[4675]: I1121 15:25:28.016356 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:28 crc kubenswrapper[4675]: I1121 15:25:28.539572 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z9c8h"] Nov 21 15:25:28 crc kubenswrapper[4675]: W1121 15:25:28.540057 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9b3c70_a1a9_46cf_a4c4_aea18618b082.slice/crio-f32761888e95dac962e25ff2dfa729014ccd133417fa25d4a10d6274226d2aaa WatchSource:0}: Error finding container f32761888e95dac962e25ff2dfa729014ccd133417fa25d4a10d6274226d2aaa: Status 404 returned error can't find the container with id f32761888e95dac962e25ff2dfa729014ccd133417fa25d4a10d6274226d2aaa Nov 21 15:25:28 crc kubenswrapper[4675]: I1121 15:25:28.647268 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerStarted","Data":"f32761888e95dac962e25ff2dfa729014ccd133417fa25d4a10d6274226d2aaa"} Nov 21 15:25:28 crc kubenswrapper[4675]: I1121 15:25:28.649988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerStarted","Data":"36e5f3589f2879deab57fe04e652f190c52faf0905acf7a11868bf451b3689be"} Nov 21 15:25:28 crc kubenswrapper[4675]: I1121 15:25:28.678167 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g5zkd" podStartSLOduration=1.933294054 podStartE2EDuration="5.678132787s" podCreationTimestamp="2025-11-21 15:25:23 +0000 UTC" firstStartedPulling="2025-11-21 15:25:24.584276788 +0000 UTC m=+6801.310691515" lastFinishedPulling="2025-11-21 15:25:28.329115521 +0000 UTC m=+6805.055530248" observedRunningTime="2025-11-21 15:25:28.668660931 +0000 UTC m=+6805.395075668" watchObservedRunningTime="2025-11-21 15:25:28.678132787 +0000 UTC m=+6805.404547514" Nov 21 15:25:29 crc kubenswrapper[4675]: I1121 15:25:29.443747 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:29 crc kubenswrapper[4675]: I1121 15:25:29.444219 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:29 crc kubenswrapper[4675]: I1121 15:25:29.662654 4675 generic.go:334] "Generic (PLEG): container finished" podID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerID="dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab" exitCode=0 Nov 21 15:25:29 crc kubenswrapper[4675]: I1121 15:25:29.662695 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerDied","Data":"dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab"} Nov 21 15:25:30 crc kubenswrapper[4675]: I1121 15:25:30.497754 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nlbwc" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="registry-server" probeResult="failure" output=< Nov 21 15:25:30 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:25:30 crc kubenswrapper[4675]: > Nov 21 15:25:31 crc kubenswrapper[4675]: I1121 15:25:31.850305 4675 scope.go:117] "RemoveContainer" 
containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:25:31 crc kubenswrapper[4675]: E1121 15:25:31.850843 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:25:32 crc kubenswrapper[4675]: I1121 15:25:32.700658 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerStarted","Data":"aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10"} Nov 21 15:25:33 crc kubenswrapper[4675]: I1121 15:25:33.608510 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:33 crc kubenswrapper[4675]: I1121 15:25:33.608912 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:33 crc kubenswrapper[4675]: I1121 15:25:33.664904 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:33 crc kubenswrapper[4675]: I1121 15:25:33.782908 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:34 crc kubenswrapper[4675]: I1121 15:25:34.389929 4675 scope.go:117] "RemoveContainer" containerID="dd38028cfec4c47524799347c45054cb2ca82e7b724ec39413ebee33d610b3fc" Nov 21 15:25:35 crc kubenswrapper[4675]: I1121 15:25:35.863509 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5zkd"] Nov 21 15:25:35 crc kubenswrapper[4675]: I1121 15:25:35.864005 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g5zkd" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="registry-server" containerID="cri-o://36e5f3589f2879deab57fe04e652f190c52faf0905acf7a11868bf451b3689be" gracePeriod=2 Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.746900 4675 generic.go:334] "Generic (PLEG): container finished" podID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerID="36e5f3589f2879deab57fe04e652f190c52faf0905acf7a11868bf451b3689be" exitCode=0 Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.746992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerDied","Data":"36e5f3589f2879deab57fe04e652f190c52faf0905acf7a11868bf451b3689be"} Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.901483 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.975515 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-utilities\") pod \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.975753 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr296\" (UniqueName: \"kubernetes.io/projected/c303aff3-8e3e-4327-b4d0-9838a3b0373d-kube-api-access-wr296\") pod \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.975806 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-catalog-content\") pod \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\" (UID: \"c303aff3-8e3e-4327-b4d0-9838a3b0373d\") " Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.982442 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c303aff3-8e3e-4327-b4d0-9838a3b0373d-kube-api-access-wr296" (OuterVolumeSpecName: "kube-api-access-wr296") pod "c303aff3-8e3e-4327-b4d0-9838a3b0373d" (UID: "c303aff3-8e3e-4327-b4d0-9838a3b0373d"). InnerVolumeSpecName "kube-api-access-wr296". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.988255 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-utilities" (OuterVolumeSpecName: "utilities") pod "c303aff3-8e3e-4327-b4d0-9838a3b0373d" (UID: "c303aff3-8e3e-4327-b4d0-9838a3b0373d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:36.998168 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c303aff3-8e3e-4327-b4d0-9838a3b0373d" (UID: "c303aff3-8e3e-4327-b4d0-9838a3b0373d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.079142 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.079181 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr296\" (UniqueName: \"kubernetes.io/projected/c303aff3-8e3e-4327-b4d0-9838a3b0373d-kube-api-access-wr296\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.079193 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c303aff3-8e3e-4327-b4d0-9838a3b0373d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.760806 4675 generic.go:334] "Generic (PLEG): container finished" podID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerID="aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10" exitCode=0 Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.760922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerDied","Data":"aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10"} Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.763780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5zkd" event={"ID":"c303aff3-8e3e-4327-b4d0-9838a3b0373d","Type":"ContainerDied","Data":"2e14b11b2e0cc09a3076e58b116c20af6e3fa9bb76c177c7a80d75e0fa9a8acf"} Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.763815 4675 scope.go:117] "RemoveContainer" containerID="36e5f3589f2879deab57fe04e652f190c52faf0905acf7a11868bf451b3689be" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.763921 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5zkd" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.787671 4675 scope.go:117] "RemoveContainer" containerID="9dc07fe5ff06bc0c2e80d4b4f33c242983d2830b60108f6298608dd83ea0ec60" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.805158 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5zkd"] Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.814313 4675 scope.go:117] "RemoveContainer" containerID="23612ce3b089605af2e91c1401f835ac12d60f08500a5002c16545ca95bcccb2" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:37.818714 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5zkd"] Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:38.864219 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" path="/var/lib/kubelet/pods/c303aff3-8e3e-4327-b4d0-9838a3b0373d/volumes" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:39.501431 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:39 crc kubenswrapper[4675]: I1121 15:25:39.574814 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:40 crc kubenswrapper[4675]: I1121 15:25:40.802854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerStarted","Data":"73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad"} Nov 21 15:25:40 crc kubenswrapper[4675]: I1121 15:25:40.834734 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z9c8h" podStartSLOduration=3.535870033 podStartE2EDuration="13.834715136s" podCreationTimestamp="2025-11-21 15:25:27 +0000 UTC" firstStartedPulling="2025-11-21 15:25:29.66445889 +0000 UTC m=+6806.390873617" lastFinishedPulling="2025-11-21 15:25:39.963303993 +0000 UTC m=+6816.689718720" observedRunningTime="2025-11-21 15:25:40.822689066 +0000 UTC m=+6817.549103803" watchObservedRunningTime="2025-11-21 15:25:40.834715136 +0000 UTC m=+6817.561129853" Nov 21 15:25:41 crc kubenswrapper[4675]: I1121 15:25:41.664559 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlbwc"] Nov 21 15:25:41 crc kubenswrapper[4675]: I1121 15:25:41.665797 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlbwc" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="registry-server" containerID="cri-o://f24ff2eec6d88e87855fe0ce09574422414fb6c58d0dcceab751daab9e14f3cd" gracePeriod=2 Nov 21 15:25:41 crc kubenswrapper[4675]: I1121 15:25:41.823266 4675 generic.go:334] "Generic (PLEG): container finished" podID="32ff446f-445c-45e9-94aa-21868bff8336" containerID="f24ff2eec6d88e87855fe0ce09574422414fb6c58d0dcceab751daab9e14f3cd" exitCode=0 Nov 21 15:25:41 crc kubenswrapper[4675]: I1121 15:25:41.823516 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerDied","Data":"f24ff2eec6d88e87855fe0ce09574422414fb6c58d0dcceab751daab9e14f3cd"} Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.224954 
4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.301987 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-utilities\") pod \"32ff446f-445c-45e9-94aa-21868bff8336\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.302387 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps2t9\" (UniqueName: \"kubernetes.io/projected/32ff446f-445c-45e9-94aa-21868bff8336-kube-api-access-ps2t9\") pod \"32ff446f-445c-45e9-94aa-21868bff8336\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.302425 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-catalog-content\") pod \"32ff446f-445c-45e9-94aa-21868bff8336\" (UID: \"32ff446f-445c-45e9-94aa-21868bff8336\") " Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.303347 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-utilities" (OuterVolumeSpecName: "utilities") pod "32ff446f-445c-45e9-94aa-21868bff8336" (UID: "32ff446f-445c-45e9-94aa-21868bff8336"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.308481 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ff446f-445c-45e9-94aa-21868bff8336-kube-api-access-ps2t9" (OuterVolumeSpecName: "kube-api-access-ps2t9") pod "32ff446f-445c-45e9-94aa-21868bff8336" (UID: "32ff446f-445c-45e9-94aa-21868bff8336"). InnerVolumeSpecName "kube-api-access-ps2t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.373223 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ff446f-445c-45e9-94aa-21868bff8336" (UID: "32ff446f-445c-45e9-94aa-21868bff8336"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.404989 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps2t9\" (UniqueName: \"kubernetes.io/projected/32ff446f-445c-45e9-94aa-21868bff8336-kube-api-access-ps2t9\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.405040 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.405052 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ff446f-445c-45e9-94aa-21868bff8336-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.839030 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlbwc" event={"ID":"32ff446f-445c-45e9-94aa-21868bff8336","Type":"ContainerDied","Data":"10b2024d9441516d28cc9a1819173333719635507788d35fd1d185d088e77910"} Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.839348 4675 scope.go:117] "RemoveContainer" containerID="f24ff2eec6d88e87855fe0ce09574422414fb6c58d0dcceab751daab9e14f3cd" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.839106 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlbwc" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.850186 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:25:42 crc kubenswrapper[4675]: E1121 15:25:42.850484 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vnxnx_openshift-machine-config-operator(6db74e00-d40a-442b-b5b0-4d3b28e05178)\"" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.885447 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlbwc"] Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.885724 4675 scope.go:117] "RemoveContainer" containerID="655a198078fcee85dc97d23034518778f3882968ea00654c9af1a0fdbc5db669" Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.895544 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlbwc"] Nov 21 15:25:42 crc kubenswrapper[4675]: I1121 15:25:42.908244 4675 scope.go:117] "RemoveContainer" containerID="5b39d5bd710fe6e434f3bf65b1fd53b78674a67dc37b3bbda37f4bc34803c67b" Nov 21 15:25:44 crc kubenswrapper[4675]: I1121 15:25:44.864928 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ff446f-445c-45e9-94aa-21868bff8336" path="/var/lib/kubelet/pods/32ff446f-445c-45e9-94aa-21868bff8336/volumes" Nov 21 15:25:48 crc kubenswrapper[4675]: I1121 15:25:48.017380 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:48 crc kubenswrapper[4675]: I1121 15:25:48.017974 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:48 crc kubenswrapper[4675]: I1121 15:25:48.077522 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:48 crc kubenswrapper[4675]: I1121 15:25:48.967800 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:49 crc kubenswrapper[4675]: I1121 15:25:49.018199 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9c8h"] Nov 21 15:25:50 crc kubenswrapper[4675]: I1121 15:25:50.936194 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z9c8h" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="registry-server" containerID="cri-o://73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad" gracePeriod=2 Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.481709 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.537644 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-catalog-content\") pod \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.537776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-utilities\") pod \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.537803 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2s47\" (UniqueName: \"kubernetes.io/projected/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-kube-api-access-g2s47\") pod \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\" (UID: \"ff9b3c70-a1a9-46cf-a4c4-aea18618b082\") " Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.541435 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-utilities" (OuterVolumeSpecName: "utilities") pod "ff9b3c70-a1a9-46cf-a4c4-aea18618b082" (UID: "ff9b3c70-a1a9-46cf-a4c4-aea18618b082"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.547446 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-kube-api-access-g2s47" (OuterVolumeSpecName: "kube-api-access-g2s47") pod "ff9b3c70-a1a9-46cf-a4c4-aea18618b082" (UID: "ff9b3c70-a1a9-46cf-a4c4-aea18618b082"). InnerVolumeSpecName "kube-api-access-g2s47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.599991 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff9b3c70-a1a9-46cf-a4c4-aea18618b082" (UID: "ff9b3c70-a1a9-46cf-a4c4-aea18618b082"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.640852 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.640898 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.640912 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2s47\" (UniqueName: \"kubernetes.io/projected/ff9b3c70-a1a9-46cf-a4c4-aea18618b082-kube-api-access-g2s47\") on node \"crc\" DevicePath \"\"" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.951575 4675 generic.go:334] "Generic (PLEG): container finished" podID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerID="73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad" exitCode=0 Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.951631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerDied","Data":"73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad"} Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.951664 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z9c8h" event={"ID":"ff9b3c70-a1a9-46cf-a4c4-aea18618b082","Type":"ContainerDied","Data":"f32761888e95dac962e25ff2dfa729014ccd133417fa25d4a10d6274226d2aaa"} Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.951684 4675 scope.go:117] "RemoveContainer" containerID="73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.951736 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z9c8h" Nov 21 15:25:51 crc kubenswrapper[4675]: I1121 15:25:51.984125 4675 scope.go:117] "RemoveContainer" containerID="aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.008494 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z9c8h"] Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.019527 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z9c8h"] Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.027950 4675 scope.go:117] "RemoveContainer" containerID="dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.084364 4675 scope.go:117] "RemoveContainer" containerID="73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad" Nov 21 15:25:52 crc kubenswrapper[4675]: E1121 15:25:52.084855 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad\": container with ID starting with 73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad not found: ID does not exist" containerID="73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.084899 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad"} err="failed to get container status \"73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad\": rpc error: code = NotFound desc = could not find container \"73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad\": container with ID starting with 73bf9ed938155496536b160d07db97902af5a9e4be0fbb50a1d3750e07fc4dad not found: ID does not exist" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.084929 4675 scope.go:117] "RemoveContainer" containerID="aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10" Nov 21 15:25:52 crc kubenswrapper[4675]: E1121 15:25:52.085313 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10\": container with ID starting with aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10 not found: ID does not exist" containerID="aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.085392 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10"} err="failed to get container status \"aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10\": rpc error: code = NotFound desc = could not find container \"aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10\": container with ID starting with aa7991e50a32a52900fa9c2bc5e6be2fe29abe716d8f3b10c7b6a78613541b10 not found: ID does not exist" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.085433 4675 scope.go:117] "RemoveContainer" containerID="dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab" Nov 21 15:25:52 crc kubenswrapper[4675]: E1121 15:25:52.086101 4675 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab\": container with ID starting with dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab not found: ID does not exist" containerID="dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.086145 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab"} err="failed to get container status \"dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab\": rpc error: code = NotFound desc = could not find container \"dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab\": container with ID starting with dad8ca738be12fb764598c5e3f22c8eb52c1729f38b3a0636b465626c92d1bab not found: ID does not exist" Nov 21 15:25:52 crc kubenswrapper[4675]: I1121 15:25:52.864392 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" path="/var/lib/kubelet/pods/ff9b3c70-a1a9-46cf-a4c4-aea18618b082/volumes" Nov 21 15:25:56 crc kubenswrapper[4675]: I1121 15:25:56.849694 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:25:58 crc kubenswrapper[4675]: I1121 15:25:58.017211 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"63c15ce0b91f96477944466de5bb059ab855d8870dc8aedaa83dd7d53f12b79a"} Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.212686 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsgxz"] Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.213814 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="extract-utilities" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.213833 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="extract-utilities" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.213858 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.213868 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.213890 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.213899 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.213918 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="extract-content" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.213926 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="extract-content" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.213961 4675 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="extract-content" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.213969 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="extract-content" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.213986 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="extract-utilities" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.213994 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="extract-utilities" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.214017 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="extract-utilities" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.214025 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="extract-utilities" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.214044 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.214054 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: E1121 15:27:20.214095 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="extract-content" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.214103 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="extract-content" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.214383 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9b3c70-a1a9-46cf-a4c4-aea18618b082" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.214406 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ff446f-445c-45e9-94aa-21868bff8336" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.214424 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c303aff3-8e3e-4327-b4d0-9838a3b0373d" containerName="registry-server" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.218112 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.231698 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsgxz"] Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.312996 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gss\" (UniqueName: \"kubernetes.io/projected/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-kube-api-access-f6gss\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.313217 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-utilities\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.313396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-catalog-content\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.417102 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-catalog-content\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.417557 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6gss\" (UniqueName: \"kubernetes.io/projected/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-kube-api-access-f6gss\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.417649 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-utilities\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.418269 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-utilities\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.418339 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-catalog-content\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.439184 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f6gss\" (UniqueName: \"kubernetes.io/projected/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-kube-api-access-f6gss\") pod \"redhat-operators-tsgxz\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:20 crc kubenswrapper[4675]: I1121 15:27:20.552459 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:21 crc kubenswrapper[4675]: I1121 15:27:21.058408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsgxz"] Nov 21 15:27:21 crc kubenswrapper[4675]: I1121 15:27:21.967736 4675 generic.go:334] "Generic (PLEG): container finished" podID="8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" containerID="cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171" exitCode=0 Nov 21 15:27:21 crc kubenswrapper[4675]: I1121 15:27:21.967841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerDied","Data":"cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171"} Nov 21 15:27:21 crc kubenswrapper[4675]: I1121 15:27:21.968059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerStarted","Data":"4db91a488b21c62f4fc0a0c2e92356b45b7197a05540dfdd7745f4d0c15506b1"} Nov 21 15:27:22 crc kubenswrapper[4675]: I1121 15:27:22.981533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerStarted","Data":"7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63"} Nov 21 15:27:30 crc kubenswrapper[4675]: I1121 15:27:30.070010 4675 generic.go:334] "Generic (PLEG): container finished" podID="8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" containerID="7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63" exitCode=0 Nov 21 15:27:30 crc kubenswrapper[4675]: I1121 15:27:30.070060 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerDied","Data":"7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63"} Nov 21 15:27:31 crc kubenswrapper[4675]: I1121 15:27:31.082747 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerStarted","Data":"b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d"} Nov 21 15:27:31 crc kubenswrapper[4675]: I1121 15:27:31.100527 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsgxz" podStartSLOduration=2.567498524 podStartE2EDuration="11.100504057s" podCreationTimestamp="2025-11-21 15:27:20 +0000 UTC" firstStartedPulling="2025-11-21 15:27:21.971592162 +0000 UTC m=+6918.698006899" lastFinishedPulling="2025-11-21 15:27:30.504597705 +0000 UTC m=+6927.231012432" observedRunningTime="2025-11-21 15:27:31.098421175 +0000 UTC m=+6927.824835902" watchObservedRunningTime="2025-11-21 15:27:31.100504057 +0000 UTC m=+6927.826918784" Nov 21 15:27:40 crc kubenswrapper[4675]: I1121 15:27:40.552875 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 
15:27:40 crc kubenswrapper[4675]: I1121 15:27:40.554648 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:41 crc kubenswrapper[4675]: I1121 15:27:41.601940 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tsgxz" podUID="8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" containerName="registry-server" probeResult="failure" output=< Nov 21 15:27:41 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Nov 21 15:27:41 crc kubenswrapper[4675]: > Nov 21 15:27:50 crc kubenswrapper[4675]: I1121 15:27:50.604123 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:50 crc kubenswrapper[4675]: I1121 15:27:50.662696 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:53 crc kubenswrapper[4675]: I1121 15:27:53.730325 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsgxz"] Nov 21 15:27:53 crc kubenswrapper[4675]: I1121 15:27:53.731727 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsgxz" podUID="8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" containerName="registry-server" containerID="cri-o://b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d" gracePeriod=2 Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.270634 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.348122 4675 generic.go:334] "Generic (PLEG): container finished" podID="8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" containerID="b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d" exitCode=0 Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.348188 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerDied","Data":"b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d"} Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.348216 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsgxz" event={"ID":"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d","Type":"ContainerDied","Data":"4db91a488b21c62f4fc0a0c2e92356b45b7197a05540dfdd7745f4d0c15506b1"} Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.348227 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsgxz" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.348237 4675 scope.go:117] "RemoveContainer" containerID="b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.395673 4675 scope.go:117] "RemoveContainer" containerID="7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.402187 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6gss\" (UniqueName: \"kubernetes.io/projected/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-kube-api-access-f6gss\") pod \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.402236 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-utilities\") pod \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.402958 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-catalog-content\") pod \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\" (UID: \"8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d\") " Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.403591 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-utilities" (OuterVolumeSpecName: "utilities") pod "8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" (UID: "8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.404341 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.409252 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-kube-api-access-f6gss" (OuterVolumeSpecName: "kube-api-access-f6gss") pod "8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" (UID: "8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d"). InnerVolumeSpecName "kube-api-access-f6gss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.419387 4675 scope.go:117] "RemoveContainer" containerID="cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.506365 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6gss\" (UniqueName: \"kubernetes.io/projected/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-kube-api-access-f6gss\") on node \"crc\" DevicePath \"\"" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.509609 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" (UID: "8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.531373 4675 scope.go:117] "RemoveContainer" containerID="b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d" Nov 21 15:27:54 crc kubenswrapper[4675]: E1121 15:27:54.531819 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d\": container with ID starting with b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d not found: ID does not exist" containerID="b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.531891 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d"} err="failed to get container status \"b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d\": rpc error: code = NotFound desc = could not find container \"b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d\": container with ID starting with b1cbe69b17993bc632be895ba6110cca1ec44bef208a3e804cec05fae91cdd2d not found: ID does not exist" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.531935 4675 scope.go:117] "RemoveContainer" containerID="7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63" Nov 21 15:27:54 crc kubenswrapper[4675]: E1121 15:27:54.532300 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63\": container with ID starting with 7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63 not found: ID does not exist" containerID="7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.532349 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63"} err="failed to get container status \"7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63\": rpc error: code = NotFound desc = could not find container \"7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63\": container with ID starting with 7114791587c2ba16c67cd3e913a7c5012206564c6eca37b5bb0050fa2f821b63 not found: ID does not exist" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.532376 4675 scope.go:117] "RemoveContainer" containerID="cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171" Nov 21 15:27:54 crc kubenswrapper[4675]: E1121 15:27:54.532979 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171\": container with ID starting with cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171 not found: ID does not exist" containerID="cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.533009 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171"} err="failed to get container status \"cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171\": rpc error: code = NotFound desc = could not 
find container \"cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171\": container with ID starting with cca2b6485626f37c0689b4cb90e1d510df7485dfc0692ed629563ab808e66171 not found: ID does not exist" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.609931 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.692853 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsgxz"] Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.710570 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsgxz"] Nov 21 15:27:54 crc kubenswrapper[4675]: I1121 15:27:54.861889 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d" path="/var/lib/kubelet/pods/8cfb653a-6e87-4adb-a5d2-19cd8ad1e33d/volumes" Nov 21 15:28:16 crc kubenswrapper[4675]: I1121 15:28:16.136058 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:28:16 crc kubenswrapper[4675]: I1121 15:28:16.136644 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:28:46 crc kubenswrapper[4675]: I1121 15:28:46.135993 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:28:46 crc kubenswrapper[4675]: I1121 15:28:46.136806 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.136689 4675 patch_prober.go:28] interesting pod/machine-config-daemon-vnxnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.137342 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.137404 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" Nov 21 15:29:16 crc 
kubenswrapper[4675]: I1121 15:29:16.138455 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63c15ce0b91f96477944466de5bb059ab855d8870dc8aedaa83dd7d53f12b79a"} pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.138532 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" podUID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerName="machine-config-daemon" containerID="cri-o://63c15ce0b91f96477944466de5bb059ab855d8870dc8aedaa83dd7d53f12b79a" gracePeriod=600 Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.309980 4675 generic.go:334] "Generic (PLEG): container finished" podID="6db74e00-d40a-442b-b5b0-4d3b28e05178" containerID="63c15ce0b91f96477944466de5bb059ab855d8870dc8aedaa83dd7d53f12b79a" exitCode=0 Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.310059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerDied","Data":"63c15ce0b91f96477944466de5bb059ab855d8870dc8aedaa83dd7d53f12b79a"} Nov 21 15:29:16 crc kubenswrapper[4675]: I1121 15:29:16.310313 4675 scope.go:117] "RemoveContainer" containerID="5eb8fcece7f06de6259e35d61d2298e3004a3f6f4251f07c23eb5bd8a4aa08bf" Nov 21 15:29:17 crc kubenswrapper[4675]: I1121 15:29:17.326540 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vnxnx" event={"ID":"6db74e00-d40a-442b-b5b0-4d3b28e05178","Type":"ContainerStarted","Data":"b72efa5dd87fc4f38239386c63d6304e8b4228f89d28518c7032e9c8bb4e25c5"}